00:00:00.001 Started by upstream project "autotest-per-patch" build number 131225
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.047 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.048 The recommended git tool is: git
00:00:00.048 using credential 00000000-0000-0000-0000-000000000002
00:00:00.049 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.088 Fetching changes from the remote Git repository
00:00:00.091 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.146 Using shallow fetch with depth 1
00:00:00.146 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.146 > git --version # timeout=10
00:00:00.194 > git --version # 'git version 2.39.2'
00:00:00.194 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.233 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.233 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.087 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.099 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.110 Checking out Revision 3f5fbcceba25866ebf7e22fd0e5d30548272f62c (FETCH_HEAD)
00:00:04.110 > git config core.sparsecheckout # timeout=10
00:00:04.120 > git read-tree -mu HEAD # timeout=10
00:00:04.135 > git checkout -f 3f5fbcceba25866ebf7e22fd0e5d30548272f62c # timeout=5
00:00:04.151 Commit message: "packer: Bump java's version"
00:00:04.151 > git rev-list --no-walk 3f5fbcceba25866ebf7e22fd0e5d30548272f62c # timeout=10
00:00:04.234 [Pipeline] Start of Pipeline
00:00:04.245 [Pipeline] library
00:00:04.246 Loading library shm_lib@master
00:00:04.246 Library shm_lib@master is cached. Copying from home.
00:00:04.258 [Pipeline] node
00:00:04.267 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:04.269 [Pipeline] {
00:00:04.278 [Pipeline] catchError
00:00:04.279 [Pipeline] {
00:00:04.289 [Pipeline] wrap
00:00:04.296 [Pipeline] {
00:00:04.303 [Pipeline] stage
00:00:04.304 [Pipeline] { (Prologue)
00:00:04.502 [Pipeline] sh
00:00:04.791 + logger -p user.info -t JENKINS-CI
00:00:04.809 [Pipeline] echo
00:00:04.811 Node: CYP9
00:00:04.817 [Pipeline] sh
00:00:05.121 [Pipeline] setCustomBuildProperty
00:00:05.133 [Pipeline] echo
00:00:05.134 Cleanup processes
00:00:05.139 [Pipeline] sh
00:00:05.426 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.426 2798350 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.442 [Pipeline] sh
00:00:05.735 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.735 ++ grep -v 'sudo pgrep'
00:00:05.735 ++ awk '{print $1}'
00:00:05.735 + sudo kill -9
00:00:05.735 + true
00:00:05.750 [Pipeline] cleanWs
00:00:05.760 [WS-CLEANUP] Deleting project workspace...
00:00:05.760 [WS-CLEANUP] Deferred wipeout is used...
00:00:05.766 [WS-CLEANUP] done
00:00:05.771 [Pipeline] setCustomBuildProperty
00:00:05.787 [Pipeline] sh
00:00:06.167 + sudo git config --global --replace-all safe.directory '*'
00:00:06.255 [Pipeline] httpRequest
00:00:06.660 [Pipeline] echo
00:00:06.661 Sorcerer 10.211.164.101 is alive
00:00:06.671 [Pipeline] retry
00:00:06.673 [Pipeline] {
00:00:06.687 [Pipeline] httpRequest
00:00:06.692 HttpMethod: GET
00:00:06.692 URL: http://10.211.164.101/packages/jbp_3f5fbcceba25866ebf7e22fd0e5d30548272f62c.tar.gz
00:00:06.693 Sending request to url: http://10.211.164.101/packages/jbp_3f5fbcceba25866ebf7e22fd0e5d30548272f62c.tar.gz
00:00:06.709 Response Code: HTTP/1.1 200 OK
00:00:06.710 Success: Status code 200 is in the accepted range: 200,404
00:00:06.710 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_3f5fbcceba25866ebf7e22fd0e5d30548272f62c.tar.gz
00:00:29.514 [Pipeline] }
00:00:29.532 [Pipeline] // retry
00:00:29.540 [Pipeline] sh
00:00:29.829 + tar --no-same-owner -xf jbp_3f5fbcceba25866ebf7e22fd0e5d30548272f62c.tar.gz
00:00:29.847 [Pipeline] httpRequest
00:00:30.291 [Pipeline] echo
00:00:30.293 Sorcerer 10.211.164.101 is alive
00:00:30.302 [Pipeline] retry
00:00:30.304 [Pipeline] {
00:00:30.318 [Pipeline] httpRequest
00:00:30.322 HttpMethod: GET
00:00:30.323 URL: http://10.211.164.101/packages/spdk_70fd76b04282ed738b3e8c9bc7be432041ec6b27.tar.gz
00:00:30.323 Sending request to url: http://10.211.164.101/packages/spdk_70fd76b04282ed738b3e8c9bc7be432041ec6b27.tar.gz
00:00:30.332 Response Code: HTTP/1.1 200 OK
00:00:30.332 Success: Status code 200 is in the accepted range: 200,404
00:00:30.333 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_70fd76b04282ed738b3e8c9bc7be432041ec6b27.tar.gz
00:01:11.811 [Pipeline] }
00:01:11.829 [Pipeline] // retry
00:01:11.837 [Pipeline] sh
00:01:12.127 + tar --no-same-owner -xf spdk_70fd76b04282ed738b3e8c9bc7be432041ec6b27.tar.gz
00:01:15.442 [Pipeline] sh
00:01:15.731 + git -C spdk log --oneline -n5
00:01:15.731 70fd76b04 bdev/nvme: Fix crash due to NULL io_path
00:01:15.731 3a02df0b1 event: add new 'mappings' parameter to static scheduler
00:01:15.731 118c273ab event: enable changing back to static scheduler
00:01:15.731 7e6d8079b lib/fuse_dispatcher: destruction sequence fixed
00:01:15.731 8dce86055 module/vfu_device/vfu_virtio_fs: EP destruction fixed
00:01:15.743 [Pipeline] }
00:01:15.759 [Pipeline] // stage
00:01:15.768 [Pipeline] stage
00:01:15.770 [Pipeline] { (Prepare)
00:01:15.787 [Pipeline] writeFile
00:01:15.802 [Pipeline] sh
00:01:16.087 + logger -p user.info -t JENKINS-CI
00:01:16.101 [Pipeline] sh
00:01:16.423 + logger -p user.info -t JENKINS-CI
00:01:16.434 [Pipeline] sh
00:01:16.720 + cat autorun-spdk.conf
00:01:16.720 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:16.720 SPDK_TEST_NVMF=1
00:01:16.720 SPDK_TEST_NVME_CLI=1
00:01:16.720 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:16.720 SPDK_TEST_NVMF_NICS=e810
00:01:16.720 SPDK_TEST_VFIOUSER=1
00:01:16.720 SPDK_RUN_UBSAN=1
00:01:16.720 NET_TYPE=phy
00:01:16.727 RUN_NIGHTLY=0
00:01:16.731 [Pipeline] readFile
00:01:16.753 [Pipeline] withEnv
00:01:16.755 [Pipeline] {
00:01:16.764 [Pipeline] sh
00:01:17.047 + set -ex
00:01:17.047 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:17.047 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:17.047 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:17.047 ++ SPDK_TEST_NVMF=1
00:01:17.047 ++ SPDK_TEST_NVME_CLI=1
00:01:17.047 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:17.047 ++ SPDK_TEST_NVMF_NICS=e810
00:01:17.047 ++ SPDK_TEST_VFIOUSER=1
00:01:17.047 ++ SPDK_RUN_UBSAN=1
00:01:17.047 ++ NET_TYPE=phy
00:01:17.047 ++ RUN_NIGHTLY=0
00:01:17.047 + case $SPDK_TEST_NVMF_NICS in
00:01:17.047 + DRIVERS=ice
00:01:17.047 + [[ tcp == \r\d\m\a ]]
00:01:17.047 + [[ -n ice ]]
00:01:17.047 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:17.047 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:17.047 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:17.047 rmmod: ERROR: Module irdma is not currently loaded
00:01:17.047 rmmod: ERROR: Module i40iw is not currently loaded
00:01:17.047 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:17.047 + true
00:01:17.047 + for D in $DRIVERS
00:01:17.047 + sudo modprobe ice
00:01:17.047 + exit 0
00:01:17.055 [Pipeline] }
00:01:17.067 [Pipeline] // withEnv
00:01:17.071 [Pipeline] }
00:01:17.083 [Pipeline] // stage
00:01:17.092 [Pipeline] catchError
00:01:17.093 [Pipeline] {
00:01:17.106 [Pipeline] timeout
00:01:17.106 Timeout set to expire in 1 hr 0 min
00:01:17.107 [Pipeline] {
00:01:17.119 [Pipeline] stage
00:01:17.120 [Pipeline] { (Tests)
00:01:17.134 [Pipeline] sh
00:01:17.420 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:17.420 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:17.420 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:17.420 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:17.420 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:17.420 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:17.420 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:17.420 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:17.420 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:17.420 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:17.420 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:17.420 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:17.420 + source /etc/os-release
00:01:17.420 ++ NAME='Fedora Linux'
00:01:17.420 ++ VERSION='39 (Cloud Edition)'
00:01:17.420 ++ ID=fedora
00:01:17.420 ++ VERSION_ID=39
00:01:17.420 ++ VERSION_CODENAME=
00:01:17.420 ++ PLATFORM_ID=platform:f39
00:01:17.420 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:17.420 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:17.420 ++ LOGO=fedora-logo-icon
00:01:17.420 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:17.420 ++ HOME_URL=https://fedoraproject.org/
00:01:17.420 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:17.420 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:17.420 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:17.420 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:17.420 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:17.420 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:17.420 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:17.420 ++ SUPPORT_END=2024-11-12
00:01:17.420 ++ VARIANT='Cloud Edition'
00:01:17.420 ++ VARIANT_ID=cloud
00:01:17.420 + uname -a
00:01:17.420 Linux spdk-cyp-09 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:17.420 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:20.719 Hugepages
00:01:20.719 node hugesize free / total
00:01:20.719 node0 1048576kB 0 / 0
00:01:20.719 node0 2048kB 0 / 0
00:01:20.719 node1 1048576kB 0 / 0
00:01:20.719 node1 2048kB 0 / 0
00:01:20.719
00:01:20.719 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:20.719 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:01:20.719 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:01:20.719 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:01:20.719 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:01:20.719 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:01:20.719 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:01:20.719 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:01:20.719 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:01:20.719 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:01:20.719 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:01:20.719 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:01:20.719 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:01:20.719 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:01:20.719 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:01:20.719 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:01:20.719 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:01:20.719 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:01:20.719 + rm -f /tmp/spdk-ld-path
00:01:20.719 + source autorun-spdk.conf
00:01:20.719 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:20.719 ++ SPDK_TEST_NVMF=1
00:01:20.719 ++ SPDK_TEST_NVME_CLI=1
00:01:20.719 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:20.719 ++ SPDK_TEST_NVMF_NICS=e810
00:01:20.719 ++ SPDK_TEST_VFIOUSER=1
00:01:20.719 ++ SPDK_RUN_UBSAN=1
00:01:20.719 ++ NET_TYPE=phy
00:01:20.719 ++ RUN_NIGHTLY=0
00:01:20.719 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:20.719 + [[ -n '' ]]
00:01:20.719 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:20.719 + for M in /var/spdk/build-*-manifest.txt
00:01:20.719 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:20.719 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:20.719 + for M in /var/spdk/build-*-manifest.txt
00:01:20.719 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:20.719 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:20.719 + for M in /var/spdk/build-*-manifest.txt
00:01:20.719 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:20.719 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:20.719 ++ uname
00:01:20.719 + [[ Linux == \L\i\n\u\x ]]
00:01:20.719 + sudo dmesg -T
00:01:20.719 + sudo dmesg --clear
00:01:20.719 + dmesg_pid=2799328
00:01:20.719 + [[ Fedora Linux == FreeBSD ]]
00:01:20.719 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:20.719 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:20.719 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:20.719 + [[ -x /usr/src/fio-static/fio ]]
00:01:20.719 + export FIO_BIN=/usr/src/fio-static/fio
00:01:20.719 + FIO_BIN=/usr/src/fio-static/fio
00:01:20.719 + sudo dmesg -Tw
00:01:20.719 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:20.719 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:20.719 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:20.719 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:20.719 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:20.719 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:20.719 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:20.719 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:20.719 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:20.719 Test configuration:
00:01:20.719 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:20.719 SPDK_TEST_NVMF=1
00:01:20.719 SPDK_TEST_NVME_CLI=1
00:01:20.719 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:20.719 SPDK_TEST_NVMF_NICS=e810
00:01:20.719 SPDK_TEST_VFIOUSER=1
00:01:20.719 SPDK_RUN_UBSAN=1
00:01:20.719 NET_TYPE=phy
00:01:20.980 RUN_NIGHTLY=0
06:44:20 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
06:44:20 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
06:44:20 -- scripts/common.sh@15 -- $ shopt -s extglob
06:44:20 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
06:44:20 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
06:44:20 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
06:44:20 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
06:44:20 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
06:44:20 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
06:44:20 -- paths/export.sh@5 -- $ export PATH
06:44:20 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
06:44:20 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
06:44:20 -- common/autobuild_common.sh@486 -- $ date +%s
06:44:20 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1729053860.XXXXXX
06:44:20 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1729053860.nBobYk
06:44:20 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
06:44:20 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
06:44:20 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
06:44:20 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
06:44:20 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
06:44:20 -- common/autobuild_common.sh@502 -- $ get_config_params
06:44:20 -- common/autotest_common.sh@407 -- $ xtrace_disable
06:44:20 -- common/autotest_common.sh@10 -- $ set +x
06:44:20 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
06:44:20 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
06:44:20 -- pm/common@17 -- $ local monitor
06:44:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
06:44:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
06:44:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
06:44:20 -- pm/common@21 -- $ date +%s
06:44:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
06:44:20 -- pm/common@25 -- $ sleep 1
00:01:20.981 06:44:20 -- pm/common@21 -- $ date +%s
00:01:20.981 06:44:20 -- pm/common@21 -- $ date +%s
00:01:20.981 06:44:20 -- pm/common@21 -- $ date +%s
00:01:20.981 06:44:20 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1729053860
00:01:20.981 06:44:20 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1729053860
00:01:20.981 06:44:20 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1729053860
00:01:20.981 06:44:20 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1729053860
00:01:20.981 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1729053860_collect-cpu-load.pm.log
00:01:20.981 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1729053860_collect-vmstat.pm.log
00:01:20.981 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1729053860_collect-cpu-temp.pm.log
00:01:20.981 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1729053860_collect-bmc-pm.bmc.pm.log
00:01:21.922 06:44:21 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
06:44:21 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
06:44:21 -- spdk/autobuild.sh@12 -- $ umask 022
06:44:21 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
06:44:21 -- spdk/autobuild.sh@16 -- $ date -u
00:01:21.922 Wed Oct 16 04:44:21 AM UTC 2024
06:44:21 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:21.922 v25.01-pre-65-g70fd76b04
06:44:21 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
06:44:21 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
06:44:21 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
06:44:21 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
06:44:21 -- common/autotest_common.sh@1107 -- $ xtrace_disable
06:44:21 -- common/autotest_common.sh@10 -- $ set +x
00:01:21.922 ************************************
00:01:21.922 START TEST ubsan
00:01:21.922 ************************************
06:44:21 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:01:21.922 using ubsan
00:01:21.922
00:01:21.922 real 0m0.001s
00:01:21.922 user 0m0.001s
00:01:21.922 sys 0m0.000s
06:44:21 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
06:44:21 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:21.922 ************************************
00:01:21.922 END TEST ubsan
00:01:21.922 ************************************
06:44:21 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
06:44:21 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
06:44:21 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
06:44:21 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
06:44:21 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
06:44:21 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
06:44:21 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
06:44:21 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
06:44:21 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:01:22.182 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:22.182 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:22.752 Using 'verbs' RDMA provider
00:01:38.226 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:53.135 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:53.135 Creating mk/config.mk...done.
00:01:53.135 Creating mk/cc.flags.mk...done.
00:01:53.135 Type 'make' to build.
00:01:53.135 06:44:50 -- spdk/autobuild.sh@70 -- $ run_test make make -j144
06:44:50 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
06:44:50 -- common/autotest_common.sh@1107 -- $ xtrace_disable
06:44:50 -- common/autotest_common.sh@10 -- $ set +x
00:01:53.135 ************************************
00:01:53.135 START TEST make
00:01:53.135 ************************************
06:44:50 make -- common/autotest_common.sh@1125 -- $ make -j144
00:01:53.135 make[1]: Nothing to be done for 'all'.
00:01:53.135 The Meson build system
00:01:53.135 Version: 1.5.0
00:01:53.135 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:53.135 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:53.135 Build type: native build
00:01:53.135 Project name: libvfio-user
00:01:53.135 Project version: 0.0.1
00:01:53.135 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:53.135 C linker for the host machine: cc ld.bfd 2.40-14
00:01:53.135 Host machine cpu family: x86_64
00:01:53.135 Host machine cpu: x86_64
00:01:53.135 Run-time dependency threads found: YES
00:01:53.135 Library dl found: YES
00:01:53.135 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:53.135 Run-time dependency json-c found: YES 0.17
00:01:53.135 Run-time dependency cmocka found: YES 1.1.7
00:01:53.135 Program pytest-3 found: NO
00:01:53.135 Program flake8 found: NO
00:01:53.135 Program misspell-fixer found: NO
00:01:53.135 Program restructuredtext-lint found: NO
00:01:53.135 Program valgrind found: YES (/usr/bin/valgrind)
00:01:53.135 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:53.135 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:53.135 Compiler for C supports arguments -Wwrite-strings: YES
00:01:53.135 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:53.135 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:53.135 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:53.135 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:53.135 Build targets in project: 8
00:01:53.135 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:53.135 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:53.135
00:01:53.135 libvfio-user 0.0.1
00:01:53.135
00:01:53.135 User defined options
00:01:53.135 buildtype : debug
00:01:53.135 default_library: shared
00:01:53.135 libdir : /usr/local/lib
00:01:53.135
00:01:53.135 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:53.707 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:53.707 [1/37] Compiling C object samples/null.p/null.c.o
00:01:53.707 [2/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:53.707 [3/37] Compiling C object samples/lspci.p/lspci.c.o
00:01:53.707 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:01:53.707 [5/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:53.707 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:01:53.707 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:01:53.707 [8/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:53.707 [9/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:53.707 [10/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:53.707 [11/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:53.707 [12/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:53.707 [13/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:53.707 [14/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:53.707 [15/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:53.707 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:01:53.707 [17/37] Compiling C object test/unit_tests.p/mocks.c.o
00:01:53.707 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:01:53.707 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:01:53.969 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:01:53.969 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:53.969 [22/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:53.969 [23/37] Compiling C object samples/server.p/server.c.o
00:01:53.969 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:53.969 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:53.969 [26/37] Compiling C object samples/client.p/client.c.o
00:01:53.969 [27/37] Linking target samples/client
00:01:53.969 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:53.969 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:01:53.969 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:01:53.969 [31/37] Linking target test/unit_tests
00:01:54.231 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:54.231 [33/37] Linking target samples/lspci
00:01:54.231 [34/37] Linking target samples/server
00:01:54.231 [35/37] Linking target samples/null
00:01:54.231 [36/37] Linking target samples/gpio-pci-idio-16
00:01:54.231 [37/37] Linking target samples/shadow_ioeventfd_server
00:01:54.231 INFO: autodetecting backend as ninja
00:01:54.231 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:54.231 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:54.803 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:54.803 ninja: no work to do.
00:02:00.096 The Meson build system
00:02:00.096 Version: 1.5.0
00:02:00.096 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:02:00.096 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:02:00.096 Build type: native build
00:02:00.096 Program cat found: YES (/usr/bin/cat)
00:02:00.096 Project name: DPDK
00:02:00.096 Project version: 24.03.0
00:02:00.096 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:00.096 C linker for the host machine: cc ld.bfd 2.40-14
00:02:00.096 Host machine cpu family: x86_64
00:02:00.096 Host machine cpu: x86_64
00:02:00.097 Message: ## Building in Developer Mode ##
00:02:00.097 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:00.097 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:02:00.097 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:00.097 Program python3 found: YES (/usr/bin/python3)
00:02:00.097 Program cat found: YES (/usr/bin/cat)
00:02:00.097 Compiler for C supports arguments -march=native: YES
00:02:00.097 Checking for size of "void *" : 8
00:02:00.097 Checking for size of "void *" : 8 (cached)
00:02:00.097 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:00.097 Library m found: YES
00:02:00.097 Library numa found: YES
00:02:00.097 Has header "numaif.h" : YES
00:02:00.097 Library fdt found: NO
00:02:00.097 Library execinfo found: NO
00:02:00.097 Has header "execinfo.h" : YES
00:02:00.097 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:00.097 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:00.097 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:00.097 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:00.097 Run-time dependency openssl found: YES 3.1.1
00:02:00.097 Run-time dependency libpcap found: YES 1.10.4
00:02:00.097 Has header "pcap.h" with dependency libpcap: YES
00:02:00.097 Compiler for C supports arguments -Wcast-qual: YES
00:02:00.097 Compiler for C supports arguments -Wdeprecated: YES
00:02:00.097 Compiler for C supports arguments -Wformat: YES
00:02:00.097 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:00.097 Compiler for C supports arguments -Wformat-security: NO
00:02:00.097 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:00.097 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:00.097 Compiler for C supports arguments -Wnested-externs: YES
00:02:00.097 Compiler for C supports arguments -Wold-style-definition: YES
00:02:00.097 Compiler for C supports arguments -Wpointer-arith: YES
00:02:00.097 Compiler for C supports arguments -Wsign-compare: YES
00:02:00.097 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:00.097 Compiler for C supports arguments -Wundef: YES
00:02:00.097 Compiler for C supports arguments -Wwrite-strings: YES
00:02:00.097 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:00.097 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:00.097 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:00.097 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:00.097 Program objdump found: YES (/usr/bin/objdump)
00:02:00.097 Compiler for C supports arguments -mavx512f: YES
00:02:00.097 Checking if "AVX512 checking" compiles: YES
00:02:00.097 Fetching value of define "__SSE4_2__" : 1
00:02:00.097 Fetching value of define "__AES__" : 1
00:02:00.097 Fetching value of define "__AVX__" : 1
00:02:00.097 Fetching value of define "__AVX2__" : 1
00:02:00.097 Fetching value of define "__AVX512BW__" : 1
00:02:00.097 Fetching value of define "__AVX512CD__" : 1
00:02:00.097 Fetching value of define "__AVX512DQ__" : 1
00:02:00.097 Fetching value of define "__AVX512F__" : 1
00:02:00.097 Fetching value of define "__AVX512VL__" : 1
00:02:00.097 Fetching value of define "__PCLMUL__" : 1
00:02:00.097 Fetching value of define "__RDRND__" : 1
00:02:00.097 Fetching value of define "__RDSEED__" : 1
00:02:00.097 Fetching value of define "__VPCLMULQDQ__" : 1
00:02:00.097 Fetching value of define "__znver1__" : (undefined)
00:02:00.097 Fetching value of define "__znver2__" : (undefined)
00:02:00.097 Fetching value of define "__znver3__" : (undefined)
00:02:00.097 Fetching value of define "__znver4__" : (undefined)
00:02:00.097 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:00.097 Message: lib/log: Defining dependency "log"
00:02:00.097 Message: lib/kvargs: Defining dependency "kvargs"
00:02:00.097 Message: lib/telemetry: Defining dependency "telemetry"
00:02:00.097 Checking for function "getentropy" : NO
00:02:00.097 Message: lib/eal: Defining dependency "eal"
00:02:00.097 Message: lib/ring: Defining dependency "ring"
00:02:00.097 Message: lib/rcu: Defining dependency "rcu"
00:02:00.097 Message: lib/mempool: Defining dependency "mempool"
00:02:00.097 Message: lib/mbuf: Defining dependency "mbuf"
00:02:00.097 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:00.097 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:00.097 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:00.097 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:00.097 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:00.097 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:02:00.097 Compiler for C supports arguments -mpclmul: YES
00:02:00.097 Compiler for C supports arguments -maes: YES
00:02:00.097 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:00.097 Compiler for C supports arguments -mavx512bw: YES
00:02:00.097 Compiler for C supports arguments -mavx512dq: YES
00:02:00.097 Compiler for C supports arguments -mavx512vl: YES
00:02:00.097 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:00.097 Compiler for C supports arguments -mavx2: YES
00:02:00.097 Compiler for C supports arguments -mavx: YES
00:02:00.097 Message: lib/net: Defining dependency "net"
00:02:00.097 Message: lib/meter: Defining dependency "meter"
00:02:00.097 Message: lib/ethdev: Defining dependency "ethdev"
00:02:00.097 Message: lib/pci: Defining dependency "pci"
00:02:00.097 Message: lib/cmdline: Defining dependency "cmdline"
00:02:00.097 Message: lib/hash: Defining dependency "hash"
00:02:00.097 Message: lib/timer: Defining dependency "timer"
00:02:00.097 Message: lib/compressdev: Defining dependency "compressdev"
00:02:00.097 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:00.097 Message: lib/dmadev: Defining dependency "dmadev"
00:02:00.097 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:00.097 Message: lib/power: Defining dependency "power" 00:02:00.097 Message: lib/reorder: Defining dependency "reorder" 00:02:00.097 Message: lib/security: Defining dependency "security" 00:02:00.097 Has header "linux/userfaultfd.h" : YES 00:02:00.097 Has header "linux/vduse.h" : YES 00:02:00.097 Message: lib/vhost: Defining dependency "vhost" 00:02:00.097 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:00.097 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:00.097 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:00.097 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:00.097 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:00.097 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:00.097 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:00.097 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:00.097 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:00.097 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:00.097 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:00.097 Configuring doxy-api-html.conf using configuration 00:02:00.097 Configuring doxy-api-man.conf using configuration 00:02:00.097 Program mandb found: YES (/usr/bin/mandb) 00:02:00.097 Program sphinx-build found: NO 00:02:00.097 Configuring rte_build_config.h using configuration 00:02:00.097 Message: 00:02:00.097 ================= 00:02:00.097 Applications Enabled 00:02:00.097 ================= 00:02:00.097 00:02:00.097 apps: 00:02:00.097 00:02:00.097 00:02:00.097 Message: 00:02:00.097 ================= 00:02:00.097 Libraries Enabled 00:02:00.097 ================= 00:02:00.097 00:02:00.097 libs: 00:02:00.097 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:00.097 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:00.097 cryptodev, dmadev, power, reorder, security, vhost, 00:02:00.097 00:02:00.097 Message: 00:02:00.097 =============== 00:02:00.097 Drivers Enabled 00:02:00.097 =============== 00:02:00.097 00:02:00.097 common: 00:02:00.097 00:02:00.097 bus: 00:02:00.097 pci, vdev, 00:02:00.097 mempool: 00:02:00.097 ring, 00:02:00.097 dma: 00:02:00.097 00:02:00.097 net: 00:02:00.097 00:02:00.097 crypto: 00:02:00.097 00:02:00.097 compress: 00:02:00.097 00:02:00.097 vdpa: 00:02:00.097 00:02:00.097 00:02:00.097 Message: 00:02:00.097 ================= 00:02:00.097 Content Skipped 00:02:00.097 ================= 00:02:00.097 00:02:00.097 apps: 00:02:00.097 dumpcap: explicitly disabled via build config 00:02:00.097 graph: explicitly disabled via build config 00:02:00.097 pdump: explicitly disabled via build config 00:02:00.097 proc-info: explicitly disabled via build config 00:02:00.097 test-acl: explicitly disabled via build config 00:02:00.097 test-bbdev: explicitly disabled via build config 00:02:00.097 test-cmdline: explicitly disabled via build config 00:02:00.097 test-compress-perf: explicitly disabled via build config 00:02:00.097 test-crypto-perf: explicitly disabled via build config 00:02:00.097 test-dma-perf: explicitly disabled via build config 00:02:00.097 test-eventdev: explicitly disabled via build config 00:02:00.097 test-fib: explicitly disabled via build config 00:02:00.097 test-flow-perf: explicitly disabled via build config 00:02:00.097 test-gpudev: explicitly disabled 
via build config 00:02:00.097 test-mldev: explicitly disabled via build config 00:02:00.097 test-pipeline: explicitly disabled via build config 00:02:00.097 test-pmd: explicitly disabled via build config 00:02:00.097 test-regex: explicitly disabled via build config 00:02:00.097 test-sad: explicitly disabled via build config 00:02:00.097 test-security-perf: explicitly disabled via build config 00:02:00.097 00:02:00.097 libs: 00:02:00.097 argparse: explicitly disabled via build config 00:02:00.097 metrics: explicitly disabled via build config 00:02:00.097 acl: explicitly disabled via build config 00:02:00.097 bbdev: explicitly disabled via build config 00:02:00.097 bitratestats: explicitly disabled via build config 00:02:00.097 bpf: explicitly disabled via build config 00:02:00.097 cfgfile: explicitly disabled via build config 00:02:00.097 distributor: explicitly disabled via build config 00:02:00.097 efd: explicitly disabled via build config 00:02:00.097 eventdev: explicitly disabled via build config 00:02:00.097 dispatcher: explicitly disabled via build config 00:02:00.097 gpudev: explicitly disabled via build config 00:02:00.097 gro: explicitly disabled via build config 00:02:00.097 gso: explicitly disabled via build config 00:02:00.097 ip_frag: explicitly disabled via build config 00:02:00.097 jobstats: explicitly disabled via build config 00:02:00.097 latencystats: explicitly disabled via build config 00:02:00.097 lpm: explicitly disabled via build config 00:02:00.097 member: explicitly disabled via build config 00:02:00.098 pcapng: explicitly disabled via build config 00:02:00.098 rawdev: explicitly disabled via build config 00:02:00.098 regexdev: explicitly disabled via build config 00:02:00.098 mldev: explicitly disabled via build config 00:02:00.098 rib: explicitly disabled via build config 00:02:00.098 sched: explicitly disabled via build config 00:02:00.098 stack: explicitly disabled via build config 00:02:00.098 ipsec: explicitly disabled via build config 00:02:00.098 pdcp: explicitly disabled via build config 00:02:00.098 fib: explicitly disabled via build config 00:02:00.098 port: explicitly disabled via build config 00:02:00.098 pdump: explicitly disabled via build config 00:02:00.098 table: explicitly disabled via build config 00:02:00.098 pipeline: explicitly disabled via build config 00:02:00.098 graph: explicitly disabled via build config 00:02:00.098 node: explicitly disabled via build config 00:02:00.098 00:02:00.098 drivers: 00:02:00.098 common/cpt: not in enabled drivers build config 00:02:00.098 common/dpaax: not in enabled drivers build config 00:02:00.098 common/iavf: not in enabled drivers build config 00:02:00.098 common/idpf: not in enabled drivers build config 00:02:00.098 common/ionic: not in enabled drivers build config 00:02:00.098 common/mvep: not in enabled drivers build config 00:02:00.098 common/octeontx: not in enabled drivers build config 00:02:00.098 bus/auxiliary: not in enabled drivers build config 00:02:00.098 bus/cdx: not in enabled drivers build config 00:02:00.098 bus/dpaa: not in enabled drivers build config 00:02:00.098 bus/fslmc: not in enabled drivers build config 00:02:00.098 bus/ifpga: not in enabled drivers build config 00:02:00.098 bus/platform: not in enabled drivers build config 00:02:00.098 bus/uacce: not in enabled drivers build config 00:02:00.098 bus/vmbus: not in enabled drivers build config 00:02:00.098 common/cnxk: not in enabled drivers build config 00:02:00.098 common/mlx5: not in enabled drivers build config 00:02:00.098 
common/nfp: not in enabled drivers build config 00:02:00.098 common/nitrox: not in enabled drivers build config 00:02:00.098 common/qat: not in enabled drivers build config 00:02:00.098 common/sfc_efx: not in enabled drivers build config 00:02:00.098 mempool/bucket: not in enabled drivers build config 00:02:00.098 mempool/cnxk: not in enabled drivers build config 00:02:00.098 mempool/dpaa: not in enabled drivers build config 00:02:00.098 mempool/dpaa2: not in enabled drivers build config 00:02:00.098 mempool/octeontx: not in enabled drivers build config 00:02:00.098 mempool/stack: not in enabled drivers build config 00:02:00.098 dma/cnxk: not in enabled drivers build config 00:02:00.098 dma/dpaa: not in enabled drivers build config 00:02:00.098 dma/dpaa2: not in enabled drivers build config 00:02:00.098 dma/hisilicon: not in enabled drivers build config 00:02:00.098 dma/idxd: not in enabled drivers build config 00:02:00.098 dma/ioat: not in enabled drivers build config 00:02:00.098 dma/skeleton: not in enabled drivers build config 00:02:00.098 net/af_packet: not in enabled drivers build config 00:02:00.098 net/af_xdp: not in enabled drivers build config 00:02:00.098 net/ark: not in enabled drivers build config 00:02:00.098 net/atlantic: not in enabled drivers build config 00:02:00.098 net/avp: not in enabled drivers build config 00:02:00.098 net/axgbe: not in enabled drivers build config 00:02:00.098 net/bnx2x: not in enabled drivers build config 00:02:00.098 net/bnxt: not in enabled drivers build config 00:02:00.098 net/bonding: not in enabled drivers build config 00:02:00.098 net/cnxk: not in enabled drivers build config 00:02:00.098 net/cpfl: not in enabled drivers build config 00:02:00.098 net/cxgbe: not in enabled drivers build config 00:02:00.098 net/dpaa: not in enabled drivers build config 00:02:00.098 net/dpaa2: not in enabled drivers build config 00:02:00.098 net/e1000: not in enabled drivers build config 00:02:00.098 net/ena: not in enabled drivers build config 00:02:00.098 net/enetc: not in enabled drivers build config 00:02:00.098 net/enetfec: not in enabled drivers build config 00:02:00.098 net/enic: not in enabled drivers build config 00:02:00.098 net/failsafe: not in enabled drivers build config 00:02:00.098 net/fm10k: not in enabled drivers build config 00:02:00.098 net/gve: not in enabled drivers build config 00:02:00.098 net/hinic: not in enabled drivers build config 00:02:00.098 net/hns3: not in enabled drivers build config 00:02:00.098 net/i40e: not in enabled drivers build config 00:02:00.098 net/iavf: not in enabled drivers build config 00:02:00.098 net/ice: not in enabled drivers build config 00:02:00.098 net/idpf: not in enabled drivers build config 00:02:00.098 net/igc: not in enabled drivers build config 00:02:00.098 net/ionic: not in enabled drivers build config 00:02:00.098 net/ipn3ke: not in enabled drivers build config 00:02:00.098 net/ixgbe: not in enabled drivers build config 00:02:00.098 net/mana: not in enabled drivers build config 00:02:00.098 net/memif: not in enabled drivers build config 00:02:00.098 net/mlx4: not in enabled drivers build config 00:02:00.098 net/mlx5: not in enabled drivers build config 00:02:00.098 net/mvneta: not in enabled drivers build config 00:02:00.098 net/mvpp2: not in enabled drivers build config 00:02:00.098 net/netvsc: not in enabled drivers build config 00:02:00.098 net/nfb: not in enabled drivers build config 00:02:00.098 net/nfp: not in enabled drivers build config 00:02:00.098 net/ngbe: not in enabled drivers build 
config 00:02:00.098 net/null: not in enabled drivers build config 00:02:00.098 net/octeontx: not in enabled drivers build config 00:02:00.098 net/octeon_ep: not in enabled drivers build config 00:02:00.098 net/pcap: not in enabled drivers build config 00:02:00.098 net/pfe: not in enabled drivers build config 00:02:00.098 net/qede: not in enabled drivers build config 00:02:00.098 net/ring: not in enabled drivers build config 00:02:00.098 net/sfc: not in enabled drivers build config 00:02:00.098 net/softnic: not in enabled drivers build config 00:02:00.098 net/tap: not in enabled drivers build config 00:02:00.098 net/thunderx: not in enabled drivers build config 00:02:00.098 net/txgbe: not in enabled drivers build config 00:02:00.098 net/vdev_netvsc: not in enabled drivers build config 00:02:00.098 net/vhost: not in enabled drivers build config 00:02:00.098 net/virtio: not in enabled drivers build config 00:02:00.098 net/vmxnet3: not in enabled drivers build config 00:02:00.098 raw/*: missing internal dependency, "rawdev" 00:02:00.098 crypto/armv8: not in enabled drivers build config 00:02:00.098 crypto/bcmfs: not in enabled drivers build config 00:02:00.098 crypto/caam_jr: not in enabled drivers build config 00:02:00.098 crypto/ccp: not in enabled drivers build config 00:02:00.098 crypto/cnxk: not in enabled drivers build config 00:02:00.098 crypto/dpaa_sec: not in enabled drivers build config 00:02:00.098 crypto/dpaa2_sec: not in enabled drivers build config 00:02:00.098 crypto/ipsec_mb: not in enabled drivers build config 00:02:00.098 crypto/mlx5: not in enabled drivers build config 00:02:00.098 crypto/mvsam: not in enabled drivers build config 00:02:00.098 crypto/nitrox: not in enabled drivers build config 00:02:00.098 crypto/null: not in enabled drivers build config 00:02:00.098 crypto/octeontx: not in enabled drivers build config 00:02:00.098 crypto/openssl: not in enabled drivers build config 00:02:00.098 crypto/scheduler: not in enabled drivers build config 00:02:00.098 crypto/uadk: not in enabled drivers build config 00:02:00.098 crypto/virtio: not in enabled drivers build config 00:02:00.098 compress/isal: not in enabled drivers build config 00:02:00.098 compress/mlx5: not in enabled drivers build config 00:02:00.098 compress/nitrox: not in enabled drivers build config 00:02:00.098 compress/octeontx: not in enabled drivers build config 00:02:00.098 compress/zlib: not in enabled drivers build config 00:02:00.098 regex/*: missing internal dependency, "regexdev" 00:02:00.098 ml/*: missing internal dependency, "mldev" 00:02:00.098 vdpa/ifc: not in enabled drivers build config 00:02:00.098 vdpa/mlx5: not in enabled drivers build config 00:02:00.098 vdpa/nfp: not in enabled drivers build config 00:02:00.098 vdpa/sfc: not in enabled drivers build config 00:02:00.098 event/*: missing internal dependency, "eventdev" 00:02:00.098 baseband/*: missing internal dependency, "bbdev" 00:02:00.098 gpu/*: missing internal dependency, "gpudev" 00:02:00.098 00:02:00.098 00:02:00.670 Build targets in project: 84 00:02:00.670 00:02:00.670 DPDK 24.03.0 00:02:00.670 00:02:00.670 User defined options 00:02:00.670 buildtype : debug 00:02:00.670 default_library : shared 00:02:00.670 libdir : lib 00:02:00.670 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:00.670 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:00.670 c_link_args : 00:02:00.670 cpu_instruction_set: native 00:02:00.670 disable_apps : 
test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:02:00.670 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:02:00.670 enable_docs : false 00:02:00.670 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:00.670 enable_kmods : false 00:02:00.670 max_lcores : 128 00:02:00.670 tests : false 00:02:00.670 00:02:00.670 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:00.935 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:01.201 [1/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:01.201 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:01.201 [3/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:01.201 [4/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:01.201 [5/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:01.202 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:01.202 [7/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:01.202 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:01.202 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:01.202 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:01.202 [11/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:01.202 [12/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:01.202 [13/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:01.202 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:01.202 [15/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:01.202 [16/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:01.202 [17/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:01.202 [18/267] Linking static target lib/librte_kvargs.a 00:02:01.202 [19/267] Linking static target lib/librte_log.a 00:02:01.202 [20/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:01.202 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:01.202 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:01.202 [23/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:01.202 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:01.202 [25/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:01.202 [26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:01.202 [27/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:01.461 [28/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:01.461 [29/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:01.461 [30/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:01.461 [31/267] Linking static target 
lib/librte_pci.a 00:02:01.461 [32/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:01.461 [33/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:01.461 [34/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:01.461 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:01.461 [36/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:01.462 [37/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:01.462 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:01.721 [39/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:01.721 [40/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:01.721 [41/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.721 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:01.721 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:01.721 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:01.721 [45/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:01.721 [46/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.722 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:01.722 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:01.722 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:01.722 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:01.722 [51/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:01.722 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:01.722 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:01.722 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:01.722 [55/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:01.722 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:01.722 [57/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:01.722 [58/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:01.722 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:01.722 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:01.722 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:01.722 [62/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:01.722 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:01.722 [64/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:01.722 [65/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:01.722 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:01.722 [67/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:01.722 [68/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:01.722 [69/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:01.722 [70/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:01.722 
[71/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:01.722 [72/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:01.722 [73/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:01.722 [74/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:01.722 [75/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:01.722 [76/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:01.722 [77/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:01.722 [78/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:01.722 [79/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:01.722 [80/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:01.722 [81/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:01.722 [82/267] Linking static target lib/librte_meter.a 00:02:01.722 [83/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:01.722 [84/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:01.722 [85/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:01.722 [86/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:01.722 [87/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:01.722 [88/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:01.722 [89/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:01.722 [90/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:01.722 [91/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:01.722 [92/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:01.722 [93/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:01.722 [94/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:01.722 [95/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:01.722 [96/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:01.722 [97/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:01.722 [98/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:01.722 [99/267] Linking static target lib/librte_telemetry.a 00:02:01.722 [100/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:01.722 [101/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:01.722 [102/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:01.722 [103/267] Linking static target lib/librte_ring.a 00:02:01.722 [104/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:01.722 [105/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:01.722 [106/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:01.722 [107/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:01.722 [108/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:01.722 [109/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:01.722 [110/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:01.722 [111/267] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:01.722 [112/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:01.722 [113/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:01.722 [114/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:01.722 [115/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:01.722 [116/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:01.722 [117/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:01.722 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:01.722 [119/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:01.722 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:01.722 [121/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:01.722 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:01.722 [123/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:01.722 [124/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:01.722 [125/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:01.722 [126/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:01.722 [127/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:01.722 [128/267] Linking static target lib/librte_timer.a 00:02:01.983 [129/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:01.983 [130/267] Linking static target lib/librte_cmdline.a 00:02:01.983 [131/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:01.983 [132/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:01.983 [133/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:01.983 [134/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:01.983 [135/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:01.983 [136/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:01.983 [137/267] Linking static target lib/librte_mempool.a 00:02:01.983 [138/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:01.983 [139/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:01.983 [140/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:01.983 [141/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:01.983 [142/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:01.983 [143/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:01.983 [144/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:01.983 [145/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:01.983 [146/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:01.983 [147/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:01.983 [148/267] Linking static target lib/librte_compressdev.a 00:02:01.983 [149/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:01.983 [150/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:01.983 [151/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 
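The meson option summary near the top of this build output (tests=false, enable_docs=false, enable_kmods=false, max_lcores=128, enable_drivers limited to the pci/vdev buses and the ring mempool, plus the long disable_libs list) corresponds to a DPDK configure step roughly like the sketch below. This is an illustration under assumptions, not the CI's literal command line: only the build directory is confirmed by the ninja banner above, and the disable_libs value is trimmed here (the full list is in the summary).

    # Sketch: reproduce the DPDK configuration summarized above.
    # The option names are real DPDK meson options; the exact invocation is assumed.
    meson setup dpdk/build-tmp dpdk \
        -Dtests=false -Denable_docs=false -Denable_kmods=false \
        -Dmax_lcores=128 \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
        -Ddisable_libs=port,lpm,ipsec,regexdev,dispatcher   # trimmed; see the summary above
    ninja -C dpdk/build-tmp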
00:02:01.983 [152/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:01.983 [153/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:01.983 [154/267] Linking static target lib/librte_power.a 00:02:01.983 [155/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:01.983 [156/267] Linking static target lib/librte_net.a 00:02:01.983 [157/267] Linking static target lib/librte_dmadev.a 00:02:01.983 [158/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:01.983 [159/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.983 [160/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:01.983 [161/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:01.983 [162/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:01.983 [163/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:01.983 [164/267] Linking static target lib/librte_reorder.a 00:02:01.983 [165/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:01.983 [166/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:01.983 [167/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:01.983 [168/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:01.983 [169/267] Linking static target lib/librte_rcu.a 00:02:01.983 [170/267] Linking static target lib/librte_eal.a 00:02:01.983 [171/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:01.983 [172/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:01.983 [173/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:01.983 [174/267] Linking target lib/librte_log.so.24.1 00:02:01.983 [175/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:01.983 [176/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:01.983 [177/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:01.983 [178/267] Linking static target lib/librte_security.a 00:02:01.983 [179/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:01.983 [180/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.983 [181/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:01.983 [182/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:01.983 [183/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:01.983 [184/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:01.983 [185/267] Linking static target drivers/librte_bus_vdev.a 00:02:01.983 [186/267] Linking static target lib/librte_mbuf.a 00:02:01.983 [187/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:02.244 [188/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:02.244 [189/267] Linking static target lib/librte_hash.a 00:02:02.244 [190/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:02.244 [191/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.244 [192/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:02.244 [193/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:02.244 [194/267] Generating 
drivers/rte_mempool_ring.pmd.c with a custom command 00:02:02.244 [195/267] Linking target lib/librte_kvargs.so.24.1 00:02:02.244 [196/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:02.244 [197/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:02.244 [198/267] Linking static target drivers/librte_mempool_ring.a 00:02:02.244 [199/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:02.244 [200/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:02.244 [201/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:02.244 [202/267] Linking static target drivers/librte_bus_pci.a 00:02:02.244 [203/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:02.244 [204/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.244 [205/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.244 [206/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:02.506 [207/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:02.506 [208/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.506 [209/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.506 [210/267] Linking static target lib/librte_cryptodev.a 00:02:02.506 [211/267] Linking target lib/librte_telemetry.so.24.1 00:02:02.506 [212/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.506 [213/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.506 [214/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:02.768 [215/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.768 [216/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.768 [217/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.768 [218/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:02.768 [219/267] Linking static target lib/librte_ethdev.a 00:02:02.768 [220/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:03.029 [221/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.029 [222/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.029 [223/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.029 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.289 [225/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.289 [226/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.860 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:03.860 [228/267] Linking static target lib/librte_vhost.a 00:02:04.803 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.747 
[230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.341 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.728 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.728 [233/267] Linking target lib/librte_eal.so.24.1 00:02:13.728 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:13.728 [235/267] Linking target lib/librte_ring.so.24.1 00:02:13.728 [236/267] Linking target lib/librte_meter.so.24.1 00:02:13.728 [237/267] Linking target lib/librte_pci.so.24.1 00:02:13.728 [238/267] Linking target lib/librte_timer.so.24.1 00:02:13.728 [239/267] Linking target lib/librte_dmadev.so.24.1 00:02:13.728 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:13.989 [241/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:13.989 [242/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:13.989 [243/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:13.989 [244/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:13.989 [245/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:13.989 [246/267] Linking target lib/librte_mempool.so.24.1 00:02:13.989 [247/267] Linking target lib/librte_rcu.so.24.1 00:02:13.989 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:13.989 [249/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:13.989 [250/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:14.250 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:14.250 [252/267] Linking target lib/librte_mbuf.so.24.1 00:02:14.250 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:14.250 [254/267] Linking target lib/librte_reorder.so.24.1 00:02:14.250 [255/267] Linking target lib/librte_compressdev.so.24.1 00:02:14.250 [256/267] Linking target lib/librte_net.so.24.1 00:02:14.250 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:02:14.510 [258/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:14.510 [259/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:14.510 [260/267] Linking target lib/librte_hash.so.24.1 00:02:14.510 [261/267] Linking target lib/librte_security.so.24.1 00:02:14.510 [262/267] Linking target lib/librte_cmdline.so.24.1 00:02:14.510 [263/267] Linking target lib/librte_ethdev.so.24.1 00:02:14.510 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:14.770 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:14.770 [266/267] Linking target lib/librte_power.so.24.1 00:02:14.770 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:14.770 INFO: autodetecting backend as ninja 00:02:14.770 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:19.043 CC lib/ut/ut.o 00:02:19.043 CC lib/log/log.o 00:02:19.043 CC lib/ut_mock/mock.o 00:02:19.043 CC lib/log/log_flags.o 00:02:19.043 CC lib/log/log_deprecated.o 00:02:19.043 LIB libspdk_ut.a 00:02:19.043 LIB libspdk_log.a 00:02:19.043 LIB libspdk_ut_mock.a 00:02:19.043 SO 
libspdk_ut.so.2.0 00:02:19.043 SO libspdk_ut_mock.so.6.0 00:02:19.043 SO libspdk_log.so.7.1 00:02:19.043 SYMLINK libspdk_ut.so 00:02:19.043 SYMLINK libspdk_ut_mock.so 00:02:19.043 SYMLINK libspdk_log.so 00:02:19.304 CC lib/util/base64.o 00:02:19.304 CC lib/dma/dma.o 00:02:19.304 CXX lib/trace_parser/trace.o 00:02:19.304 CC lib/ioat/ioat.o 00:02:19.304 CC lib/util/bit_array.o 00:02:19.304 CC lib/util/cpuset.o 00:02:19.304 CC lib/util/crc16.o 00:02:19.304 CC lib/util/crc32.o 00:02:19.304 CC lib/util/crc32c.o 00:02:19.304 CC lib/util/crc32_ieee.o 00:02:19.304 CC lib/util/crc64.o 00:02:19.304 CC lib/util/dif.o 00:02:19.304 CC lib/util/fd.o 00:02:19.304 CC lib/util/fd_group.o 00:02:19.304 CC lib/util/file.o 00:02:19.304 CC lib/util/hexlify.o 00:02:19.304 CC lib/util/iov.o 00:02:19.304 CC lib/util/math.o 00:02:19.304 CC lib/util/net.o 00:02:19.304 CC lib/util/pipe.o 00:02:19.304 CC lib/util/strerror_tls.o 00:02:19.304 CC lib/util/string.o 00:02:19.304 CC lib/util/uuid.o 00:02:19.304 CC lib/util/xor.o 00:02:19.304 CC lib/util/zipf.o 00:02:19.304 CC lib/util/md5.o 00:02:19.565 CC lib/vfio_user/host/vfio_user.o 00:02:19.565 CC lib/vfio_user/host/vfio_user_pci.o 00:02:19.565 LIB libspdk_dma.a 00:02:19.565 SO libspdk_dma.so.5.0 00:02:19.565 LIB libspdk_ioat.a 00:02:19.565 SO libspdk_ioat.so.7.0 00:02:19.565 SYMLINK libspdk_dma.so 00:02:19.826 SYMLINK libspdk_ioat.so 00:02:19.826 LIB libspdk_vfio_user.a 00:02:19.826 SO libspdk_vfio_user.so.5.0 00:02:19.826 LIB libspdk_util.a 00:02:19.826 SYMLINK libspdk_vfio_user.so 00:02:19.826 SO libspdk_util.so.10.0 00:02:19.826 LIB libspdk_trace_parser.a 00:02:20.088 SO libspdk_trace_parser.so.6.0 00:02:20.088 SYMLINK libspdk_util.so 00:02:20.088 SYMLINK libspdk_trace_parser.so 00:02:20.350 CC lib/rdma_utils/rdma_utils.o 00:02:20.350 CC lib/vmd/vmd.o 00:02:20.350 CC lib/vmd/led.o 00:02:20.350 CC lib/idxd/idxd.o 00:02:20.350 CC lib/json/json_parse.o 00:02:20.350 CC lib/idxd/idxd_user.o 00:02:20.350 CC lib/json/json_util.o 00:02:20.350 CC lib/rdma_provider/common.o 00:02:20.350 CC lib/idxd/idxd_kernel.o 00:02:20.350 CC lib/json/json_write.o 00:02:20.350 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:20.350 CC lib/env_dpdk/env.o 00:02:20.350 CC lib/conf/conf.o 00:02:20.350 CC lib/env_dpdk/memory.o 00:02:20.350 CC lib/env_dpdk/pci.o 00:02:20.350 CC lib/env_dpdk/init.o 00:02:20.350 CC lib/env_dpdk/threads.o 00:02:20.350 CC lib/env_dpdk/pci_ioat.o 00:02:20.350 CC lib/env_dpdk/pci_virtio.o 00:02:20.350 CC lib/env_dpdk/pci_vmd.o 00:02:20.350 CC lib/env_dpdk/pci_idxd.o 00:02:20.350 CC lib/env_dpdk/pci_event.o 00:02:20.350 CC lib/env_dpdk/sigbus_handler.o 00:02:20.350 CC lib/env_dpdk/pci_dpdk.o 00:02:20.350 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:20.350 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:20.611 LIB libspdk_rdma_provider.a 00:02:20.611 LIB libspdk_conf.a 00:02:20.611 SO libspdk_rdma_provider.so.6.0 00:02:20.611 LIB libspdk_rdma_utils.a 00:02:20.611 SO libspdk_conf.so.6.0 00:02:20.611 LIB libspdk_json.a 00:02:20.611 SYMLINK libspdk_rdma_provider.so 00:02:20.611 SO libspdk_rdma_utils.so.1.0 00:02:20.872 SO libspdk_json.so.6.0 00:02:20.872 SYMLINK libspdk_conf.so 00:02:20.872 SYMLINK libspdk_rdma_utils.so 00:02:20.872 SYMLINK libspdk_json.so 00:02:20.872 LIB libspdk_idxd.a 00:02:21.133 SO libspdk_idxd.so.12.1 00:02:21.133 LIB libspdk_vmd.a 00:02:21.133 SO libspdk_vmd.so.6.0 00:02:21.133 SYMLINK libspdk_idxd.so 00:02:21.133 SYMLINK libspdk_vmd.so 00:02:21.133 CC lib/jsonrpc/jsonrpc_server.o 00:02:21.133 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:21.133 CC 
lib/jsonrpc/jsonrpc_client.o 00:02:21.133 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:21.394 LIB libspdk_jsonrpc.a 00:02:21.394 SO libspdk_jsonrpc.so.6.0 00:02:21.655 SYMLINK libspdk_jsonrpc.so 00:02:21.655 LIB libspdk_env_dpdk.a 00:02:21.655 SO libspdk_env_dpdk.so.15.0 00:02:21.916 SYMLINK libspdk_env_dpdk.so 00:02:21.916 CC lib/rpc/rpc.o 00:02:22.176 LIB libspdk_rpc.a 00:02:22.176 SO libspdk_rpc.so.6.0 00:02:22.176 SYMLINK libspdk_rpc.so 00:02:22.773 CC lib/keyring/keyring.o 00:02:22.773 CC lib/keyring/keyring_rpc.o 00:02:22.773 CC lib/trace/trace.o 00:02:22.773 CC lib/notify/notify.o 00:02:22.773 CC lib/trace/trace_flags.o 00:02:22.773 CC lib/notify/notify_rpc.o 00:02:22.773 CC lib/trace/trace_rpc.o 00:02:22.774 LIB libspdk_notify.a 00:02:22.774 SO libspdk_notify.so.6.0 00:02:22.774 LIB libspdk_keyring.a 00:02:22.774 LIB libspdk_trace.a 00:02:22.774 SO libspdk_keyring.so.2.0 00:02:22.774 SYMLINK libspdk_notify.so 00:02:22.774 SO libspdk_trace.so.11.0 00:02:23.033 SYMLINK libspdk_keyring.so 00:02:23.033 SYMLINK libspdk_trace.so 00:02:23.294 CC lib/thread/thread.o 00:02:23.294 CC lib/thread/iobuf.o 00:02:23.294 CC lib/sock/sock.o 00:02:23.294 CC lib/sock/sock_rpc.o 00:02:23.867 LIB libspdk_sock.a 00:02:23.867 SO libspdk_sock.so.10.0 00:02:23.867 SYMLINK libspdk_sock.so 00:02:24.128 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:24.128 CC lib/nvme/nvme_ctrlr.o 00:02:24.128 CC lib/nvme/nvme_fabric.o 00:02:24.128 CC lib/nvme/nvme_ns_cmd.o 00:02:24.128 CC lib/nvme/nvme_ns.o 00:02:24.128 CC lib/nvme/nvme_pcie_common.o 00:02:24.128 CC lib/nvme/nvme_pcie.o 00:02:24.128 CC lib/nvme/nvme_qpair.o 00:02:24.128 CC lib/nvme/nvme.o 00:02:24.128 CC lib/nvme/nvme_quirks.o 00:02:24.128 CC lib/nvme/nvme_transport.o 00:02:24.128 CC lib/nvme/nvme_discovery.o 00:02:24.128 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:24.128 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:24.128 CC lib/nvme/nvme_tcp.o 00:02:24.128 CC lib/nvme/nvme_opal.o 00:02:24.128 CC lib/nvme/nvme_io_msg.o 00:02:24.128 CC lib/nvme/nvme_poll_group.o 00:02:24.128 CC lib/nvme/nvme_zns.o 00:02:24.128 CC lib/nvme/nvme_stubs.o 00:02:24.128 CC lib/nvme/nvme_auth.o 00:02:24.128 CC lib/nvme/nvme_cuse.o 00:02:24.128 CC lib/nvme/nvme_vfio_user.o 00:02:24.128 CC lib/nvme/nvme_rdma.o 00:02:24.699 LIB libspdk_thread.a 00:02:24.699 SO libspdk_thread.so.10.2 00:02:24.699 SYMLINK libspdk_thread.so 00:02:25.271 CC lib/blob/blobstore.o 00:02:25.271 CC lib/blob/request.o 00:02:25.271 CC lib/blob/zeroes.o 00:02:25.271 CC lib/vfu_tgt/tgt_endpoint.o 00:02:25.271 CC lib/blob/blob_bs_dev.o 00:02:25.271 CC lib/vfu_tgt/tgt_rpc.o 00:02:25.271 CC lib/accel/accel.o 00:02:25.271 CC lib/accel/accel_rpc.o 00:02:25.271 CC lib/accel/accel_sw.o 00:02:25.271 CC lib/init/json_config.o 00:02:25.271 CC lib/init/subsystem.o 00:02:25.271 CC lib/init/subsystem_rpc.o 00:02:25.271 CC lib/virtio/virtio.o 00:02:25.271 CC lib/fsdev/fsdev.o 00:02:25.271 CC lib/init/rpc.o 00:02:25.271 CC lib/virtio/virtio_vhost_user.o 00:02:25.271 CC lib/fsdev/fsdev_io.o 00:02:25.271 CC lib/virtio/virtio_vfio_user.o 00:02:25.271 CC lib/fsdev/fsdev_rpc.o 00:02:25.271 CC lib/virtio/virtio_pci.o 00:02:25.533 LIB libspdk_init.a 00:02:25.533 SO libspdk_init.so.6.0 00:02:25.533 LIB libspdk_vfu_tgt.a 00:02:25.533 LIB libspdk_virtio.a 00:02:25.533 SO libspdk_vfu_tgt.so.3.0 00:02:25.533 SO libspdk_virtio.so.7.0 00:02:25.533 SYMLINK libspdk_init.so 00:02:25.533 SYMLINK libspdk_vfu_tgt.so 00:02:25.533 SYMLINK libspdk_virtio.so 00:02:25.793 LIB libspdk_fsdev.a 00:02:25.793 SO libspdk_fsdev.so.1.0 00:02:25.793 CC lib/event/app.o 00:02:25.793 CC 
lib/event/reactor.o 00:02:25.793 CC lib/event/log_rpc.o 00:02:25.793 CC lib/event/app_rpc.o 00:02:25.793 SYMLINK libspdk_fsdev.so 00:02:25.793 CC lib/event/scheduler_static.o 00:02:26.054 LIB libspdk_accel.a 00:02:26.054 LIB libspdk_nvme.a 00:02:26.054 SO libspdk_accel.so.16.0 00:02:26.315 SYMLINK libspdk_accel.so 00:02:26.315 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:26.315 SO libspdk_nvme.so.14.0 00:02:26.315 LIB libspdk_event.a 00:02:26.315 SO libspdk_event.so.14.0 00:02:26.576 SYMLINK libspdk_event.so 00:02:26.576 SYMLINK libspdk_nvme.so 00:02:26.576 CC lib/bdev/bdev.o 00:02:26.576 CC lib/bdev/bdev_rpc.o 00:02:26.576 CC lib/bdev/bdev_zone.o 00:02:26.576 CC lib/bdev/part.o 00:02:26.576 CC lib/bdev/scsi_nvme.o 00:02:26.837 LIB libspdk_fuse_dispatcher.a 00:02:26.837 SO libspdk_fuse_dispatcher.so.1.0 00:02:26.837 SYMLINK libspdk_fuse_dispatcher.so 00:02:27.780 LIB libspdk_blob.a 00:02:27.780 SO libspdk_blob.so.11.0 00:02:28.040 SYMLINK libspdk_blob.so 00:02:28.301 CC lib/blobfs/blobfs.o 00:02:28.301 CC lib/blobfs/tree.o 00:02:28.301 CC lib/lvol/lvol.o 00:02:28.872 LIB libspdk_bdev.a 00:02:28.872 SO libspdk_bdev.so.17.0 00:02:29.133 LIB libspdk_blobfs.a 00:02:29.133 SO libspdk_blobfs.so.10.0 00:02:29.133 SYMLINK libspdk_bdev.so 00:02:29.133 LIB libspdk_lvol.a 00:02:29.133 SYMLINK libspdk_blobfs.so 00:02:29.133 SO libspdk_lvol.so.10.0 00:02:29.133 SYMLINK libspdk_lvol.so 00:02:29.393 CC lib/scsi/dev.o 00:02:29.393 CC lib/scsi/lun.o 00:02:29.393 CC lib/scsi/port.o 00:02:29.393 CC lib/scsi/scsi.o 00:02:29.393 CC lib/scsi/scsi_bdev.o 00:02:29.393 CC lib/scsi/scsi_pr.o 00:02:29.393 CC lib/scsi/scsi_rpc.o 00:02:29.393 CC lib/ublk/ublk.o 00:02:29.393 CC lib/scsi/task.o 00:02:29.393 CC lib/ublk/ublk_rpc.o 00:02:29.393 CC lib/nvmf/ctrlr.o 00:02:29.393 CC lib/nbd/nbd.o 00:02:29.393 CC lib/ftl/ftl_core.o 00:02:29.393 CC lib/nvmf/ctrlr_discovery.o 00:02:29.393 CC lib/nbd/nbd_rpc.o 00:02:29.393 CC lib/ftl/ftl_init.o 00:02:29.393 CC lib/nvmf/ctrlr_bdev.o 00:02:29.393 CC lib/ftl/ftl_layout.o 00:02:29.393 CC lib/nvmf/subsystem.o 00:02:29.393 CC lib/ftl/ftl_debug.o 00:02:29.393 CC lib/nvmf/nvmf.o 00:02:29.393 CC lib/ftl/ftl_io.o 00:02:29.393 CC lib/nvmf/nvmf_rpc.o 00:02:29.393 CC lib/ftl/ftl_sb.o 00:02:29.393 CC lib/nvmf/transport.o 00:02:29.393 CC lib/ftl/ftl_l2p.o 00:02:29.393 CC lib/ftl/ftl_l2p_flat.o 00:02:29.393 CC lib/nvmf/tcp.o 00:02:29.393 CC lib/ftl/ftl_nv_cache.o 00:02:29.393 CC lib/nvmf/stubs.o 00:02:29.393 CC lib/ftl/ftl_band.o 00:02:29.393 CC lib/ftl/ftl_band_ops.o 00:02:29.393 CC lib/nvmf/mdns_server.o 00:02:29.393 CC lib/ftl/ftl_writer.o 00:02:29.393 CC lib/nvmf/vfio_user.o 00:02:29.393 CC lib/ftl/ftl_rq.o 00:02:29.393 CC lib/nvmf/rdma.o 00:02:29.393 CC lib/ftl/ftl_reloc.o 00:02:29.393 CC lib/nvmf/auth.o 00:02:29.393 CC lib/ftl/ftl_l2p_cache.o 00:02:29.393 CC lib/ftl/ftl_p2l.o 00:02:29.393 CC lib/ftl/ftl_p2l_log.o 00:02:29.393 CC lib/ftl/mngt/ftl_mngt.o 00:02:29.393 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:29.393 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:29.393 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:29.393 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:29.393 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:29.393 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:29.393 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:29.393 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:29.393 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:29.393 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:29.393 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:29.393 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:29.393 CC lib/ftl/utils/ftl_conf.o 00:02:29.393 CC lib/ftl/utils/ftl_md.o 
00:02:29.393 CC lib/ftl/utils/ftl_mempool.o 00:02:29.393 CC lib/ftl/utils/ftl_bitmap.o 00:02:29.393 CC lib/ftl/utils/ftl_property.o 00:02:29.393 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:29.393 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:29.393 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:29.393 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:29.393 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:29.393 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:29.393 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:29.393 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:29.393 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:29.393 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:29.393 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:29.393 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:29.393 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:29.393 CC lib/ftl/base/ftl_base_dev.o 00:02:29.393 CC lib/ftl/base/ftl_base_bdev.o 00:02:29.393 CC lib/ftl/ftl_trace.o 00:02:29.961 LIB libspdk_nbd.a 00:02:29.961 SO libspdk_nbd.so.7.0 00:02:29.961 LIB libspdk_scsi.a 00:02:29.961 SO libspdk_scsi.so.9.0 00:02:30.221 SYMLINK libspdk_nbd.so 00:02:30.222 LIB libspdk_ublk.a 00:02:30.222 SYMLINK libspdk_scsi.so 00:02:30.222 SO libspdk_ublk.so.3.0 00:02:30.222 SYMLINK libspdk_ublk.so 00:02:30.483 LIB libspdk_ftl.a 00:02:30.483 CC lib/iscsi/conn.o 00:02:30.483 CC lib/iscsi/init_grp.o 00:02:30.483 CC lib/iscsi/iscsi.o 00:02:30.483 CC lib/iscsi/param.o 00:02:30.483 CC lib/iscsi/portal_grp.o 00:02:30.483 CC lib/iscsi/tgt_node.o 00:02:30.483 CC lib/vhost/vhost.o 00:02:30.483 CC lib/iscsi/iscsi_subsystem.o 00:02:30.483 CC lib/vhost/vhost_rpc.o 00:02:30.483 CC lib/iscsi/iscsi_rpc.o 00:02:30.483 CC lib/vhost/vhost_scsi.o 00:02:30.483 CC lib/iscsi/task.o 00:02:30.483 CC lib/vhost/vhost_blk.o 00:02:30.483 CC lib/vhost/rte_vhost_user.o 00:02:30.743 SO libspdk_ftl.so.9.0 00:02:31.006 SYMLINK libspdk_ftl.so 00:02:31.579 LIB libspdk_nvmf.a 00:02:31.579 SO libspdk_nvmf.so.19.0 00:02:31.579 LIB libspdk_vhost.a 00:02:31.579 SO libspdk_vhost.so.8.0 00:02:31.579 SYMLINK libspdk_vhost.so 00:02:31.841 SYMLINK libspdk_nvmf.so 00:02:31.841 LIB libspdk_iscsi.a 00:02:31.841 SO libspdk_iscsi.so.8.0 00:02:32.103 SYMLINK libspdk_iscsi.so 00:02:32.675 CC module/vfu_device/vfu_virtio.o 00:02:32.675 CC module/vfu_device/vfu_virtio_scsi.o 00:02:32.675 CC module/vfu_device/vfu_virtio_blk.o 00:02:32.675 CC module/vfu_device/vfu_virtio_rpc.o 00:02:32.675 CC module/vfu_device/vfu_virtio_fs.o 00:02:32.675 CC module/env_dpdk/env_dpdk_rpc.o 00:02:32.675 LIB libspdk_env_dpdk_rpc.a 00:02:32.675 CC module/accel/error/accel_error_rpc.o 00:02:32.675 CC module/accel/error/accel_error.o 00:02:32.675 CC module/sock/posix/posix.o 00:02:32.675 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:32.675 CC module/keyring/file/keyring.o 00:02:32.675 CC module/fsdev/aio/fsdev_aio.o 00:02:32.675 CC module/keyring/file/keyring_rpc.o 00:02:32.675 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:32.675 CC module/blob/bdev/blob_bdev.o 00:02:32.675 CC module/fsdev/aio/linux_aio_mgr.o 00:02:32.675 CC module/accel/iaa/accel_iaa.o 00:02:32.675 CC module/accel/iaa/accel_iaa_rpc.o 00:02:32.675 CC module/accel/ioat/accel_ioat.o 00:02:32.675 CC module/keyring/linux/keyring.o 00:02:32.675 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:32.675 CC module/accel/ioat/accel_ioat_rpc.o 00:02:32.675 CC module/keyring/linux/keyring_rpc.o 00:02:32.675 CC module/accel/dsa/accel_dsa.o 00:02:32.675 CC module/accel/dsa/accel_dsa_rpc.o 00:02:32.675 CC module/scheduler/gscheduler/gscheduler.o 00:02:32.675 SO libspdk_env_dpdk_rpc.so.6.0 00:02:32.937 SYMLINK 
libspdk_env_dpdk_rpc.so 00:02:32.937 LIB libspdk_keyring_linux.a 00:02:32.937 LIB libspdk_accel_error.a 00:02:32.937 LIB libspdk_keyring_file.a 00:02:32.937 LIB libspdk_scheduler_dpdk_governor.a 00:02:32.937 LIB libspdk_scheduler_gscheduler.a 00:02:32.937 LIB libspdk_accel_ioat.a 00:02:32.937 SO libspdk_keyring_linux.so.1.0 00:02:32.937 LIB libspdk_scheduler_dynamic.a 00:02:32.937 SO libspdk_accel_error.so.2.0 00:02:32.937 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:32.937 SO libspdk_keyring_file.so.2.0 00:02:32.937 SO libspdk_scheduler_gscheduler.so.4.0 00:02:32.937 LIB libspdk_accel_iaa.a 00:02:32.937 SO libspdk_accel_ioat.so.6.0 00:02:32.937 SO libspdk_scheduler_dynamic.so.4.0 00:02:33.198 SO libspdk_accel_iaa.so.3.0 00:02:33.198 LIB libspdk_blob_bdev.a 00:02:33.198 SYMLINK libspdk_keyring_linux.so 00:02:33.198 SYMLINK libspdk_accel_error.so 00:02:33.198 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:33.198 SYMLINK libspdk_scheduler_gscheduler.so 00:02:33.198 SYMLINK libspdk_keyring_file.so 00:02:33.198 LIB libspdk_accel_dsa.a 00:02:33.198 SYMLINK libspdk_accel_ioat.so 00:02:33.198 SYMLINK libspdk_scheduler_dynamic.so 00:02:33.198 SO libspdk_blob_bdev.so.11.0 00:02:33.198 SO libspdk_accel_dsa.so.5.0 00:02:33.198 SYMLINK libspdk_accel_iaa.so 00:02:33.198 LIB libspdk_vfu_device.a 00:02:33.198 SYMLINK libspdk_blob_bdev.so 00:02:33.198 SO libspdk_vfu_device.so.3.0 00:02:33.198 SYMLINK libspdk_accel_dsa.so 00:02:33.198 SYMLINK libspdk_vfu_device.so 00:02:33.460 LIB libspdk_fsdev_aio.a 00:02:33.460 SO libspdk_fsdev_aio.so.1.0 00:02:33.460 LIB libspdk_sock_posix.a 00:02:33.460 SYMLINK libspdk_fsdev_aio.so 00:02:33.460 SO libspdk_sock_posix.so.6.0 00:02:33.720 SYMLINK libspdk_sock_posix.so 00:02:33.720 CC module/bdev/delay/vbdev_delay.o 00:02:33.720 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:33.720 CC module/bdev/ftl/bdev_ftl.o 00:02:33.720 CC module/bdev/error/vbdev_error.o 00:02:33.720 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:33.720 CC module/bdev/lvol/vbdev_lvol.o 00:02:33.720 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:33.720 CC module/bdev/error/vbdev_error_rpc.o 00:02:33.720 CC module/bdev/gpt/gpt.o 00:02:33.720 CC module/bdev/malloc/bdev_malloc.o 00:02:33.720 CC module/bdev/gpt/vbdev_gpt.o 00:02:33.720 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:33.720 CC module/bdev/null/bdev_null.o 00:02:33.720 CC module/bdev/raid/bdev_raid.o 00:02:33.720 CC module/bdev/raid/bdev_raid_rpc.o 00:02:33.720 CC module/bdev/null/bdev_null_rpc.o 00:02:33.720 CC module/bdev/iscsi/bdev_iscsi.o 00:02:33.720 CC module/bdev/raid/bdev_raid_sb.o 00:02:33.720 CC module/blobfs/bdev/blobfs_bdev.o 00:02:33.720 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:33.720 CC module/bdev/passthru/vbdev_passthru.o 00:02:33.720 CC module/bdev/raid/raid0.o 00:02:33.720 CC module/bdev/raid/raid1.o 00:02:33.720 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:33.720 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:33.720 CC module/bdev/raid/concat.o 00:02:33.720 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:33.720 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:33.720 CC module/bdev/split/vbdev_split.o 00:02:33.720 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:33.720 CC module/bdev/nvme/bdev_nvme.o 00:02:33.720 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:33.720 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:33.720 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:33.720 CC module/bdev/split/vbdev_split_rpc.o 00:02:33.720 CC module/bdev/nvme/nvme_rpc.o 00:02:33.720 CC module/bdev/nvme/bdev_mdns_client.o 00:02:33.720 CC 
module/bdev/nvme/vbdev_opal.o 00:02:33.720 CC module/bdev/aio/bdev_aio.o 00:02:33.720 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:33.720 CC module/bdev/aio/bdev_aio_rpc.o 00:02:33.720 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:33.980 LIB libspdk_blobfs_bdev.a 00:02:33.980 SO libspdk_blobfs_bdev.so.6.0 00:02:33.980 LIB libspdk_bdev_error.a 00:02:34.242 LIB libspdk_bdev_ftl.a 00:02:34.242 LIB libspdk_bdev_gpt.a 00:02:34.242 LIB libspdk_bdev_null.a 00:02:34.242 LIB libspdk_bdev_split.a 00:02:34.242 SO libspdk_bdev_error.so.6.0 00:02:34.242 SO libspdk_bdev_gpt.so.6.0 00:02:34.242 SYMLINK libspdk_blobfs_bdev.so 00:02:34.242 SO libspdk_bdev_ftl.so.6.0 00:02:34.242 SO libspdk_bdev_null.so.6.0 00:02:34.242 SO libspdk_bdev_split.so.6.0 00:02:34.242 LIB libspdk_bdev_passthru.a 00:02:34.242 LIB libspdk_bdev_delay.a 00:02:34.242 SYMLINK libspdk_bdev_error.so 00:02:34.242 LIB libspdk_bdev_malloc.a 00:02:34.242 SO libspdk_bdev_passthru.so.6.0 00:02:34.242 LIB libspdk_bdev_aio.a 00:02:34.242 LIB libspdk_bdev_zone_block.a 00:02:34.242 SYMLINK libspdk_bdev_ftl.so 00:02:34.242 SYMLINK libspdk_bdev_gpt.so 00:02:34.242 LIB libspdk_bdev_iscsi.a 00:02:34.242 SYMLINK libspdk_bdev_null.so 00:02:34.242 SO libspdk_bdev_delay.so.6.0 00:02:34.242 SYMLINK libspdk_bdev_split.so 00:02:34.242 SO libspdk_bdev_malloc.so.6.0 00:02:34.242 SO libspdk_bdev_zone_block.so.6.0 00:02:34.242 SO libspdk_bdev_aio.so.6.0 00:02:34.242 SYMLINK libspdk_bdev_passthru.so 00:02:34.242 SO libspdk_bdev_iscsi.so.6.0 00:02:34.242 SYMLINK libspdk_bdev_delay.so 00:02:34.242 SYMLINK libspdk_bdev_malloc.so 00:02:34.242 SYMLINK libspdk_bdev_zone_block.so 00:02:34.242 SYMLINK libspdk_bdev_aio.so 00:02:34.242 LIB libspdk_bdev_lvol.a 00:02:34.242 SYMLINK libspdk_bdev_iscsi.so 00:02:34.242 LIB libspdk_bdev_virtio.a 00:02:34.504 SO libspdk_bdev_lvol.so.6.0 00:02:34.504 SO libspdk_bdev_virtio.so.6.0 00:02:34.504 SYMLINK libspdk_bdev_lvol.so 00:02:34.504 SYMLINK libspdk_bdev_virtio.so 00:02:34.766 LIB libspdk_bdev_raid.a 00:02:34.766 SO libspdk_bdev_raid.so.6.0 00:02:35.026 SYMLINK libspdk_bdev_raid.so 00:02:35.970 LIB libspdk_bdev_nvme.a 00:02:35.970 SO libspdk_bdev_nvme.so.7.0 00:02:35.970 SYMLINK libspdk_bdev_nvme.so 00:02:36.913 CC module/event/subsystems/iobuf/iobuf.o 00:02:36.913 CC module/event/subsystems/vmd/vmd.o 00:02:36.913 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:36.913 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:36.913 CC module/event/subsystems/sock/sock.o 00:02:36.913 CC module/event/subsystems/scheduler/scheduler.o 00:02:36.913 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:36.913 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:36.913 CC module/event/subsystems/keyring/keyring.o 00:02:36.913 CC module/event/subsystems/fsdev/fsdev.o 00:02:36.913 LIB libspdk_event_fsdev.a 00:02:36.913 LIB libspdk_event_vfu_tgt.a 00:02:36.913 LIB libspdk_event_vmd.a 00:02:36.913 LIB libspdk_event_keyring.a 00:02:36.913 LIB libspdk_event_scheduler.a 00:02:36.913 LIB libspdk_event_vhost_blk.a 00:02:36.913 LIB libspdk_event_sock.a 00:02:36.913 LIB libspdk_event_iobuf.a 00:02:36.913 SO libspdk_event_fsdev.so.1.0 00:02:36.913 SO libspdk_event_vfu_tgt.so.3.0 00:02:36.913 SO libspdk_event_vmd.so.6.0 00:02:36.913 SO libspdk_event_scheduler.so.4.0 00:02:36.913 SO libspdk_event_keyring.so.1.0 00:02:36.913 SO libspdk_event_vhost_blk.so.3.0 00:02:36.913 SO libspdk_event_sock.so.5.0 00:02:36.913 SO libspdk_event_iobuf.so.3.0 00:02:37.174 SYMLINK libspdk_event_fsdev.so 00:02:37.174 SYMLINK libspdk_event_sock.so 00:02:37.174 SYMLINK 
libspdk_event_vfu_tgt.so 00:02:37.174 SYMLINK libspdk_event_vmd.so 00:02:37.174 SYMLINK libspdk_event_scheduler.so 00:02:37.174 SYMLINK libspdk_event_vhost_blk.so 00:02:37.174 SYMLINK libspdk_event_keyring.so 00:02:37.174 SYMLINK libspdk_event_iobuf.so 00:02:37.435 CC module/event/subsystems/accel/accel.o 00:02:37.696 LIB libspdk_event_accel.a 00:02:37.696 SO libspdk_event_accel.so.6.0 00:02:37.696 SYMLINK libspdk_event_accel.so 00:02:37.965 CC module/event/subsystems/bdev/bdev.o 00:02:38.227 LIB libspdk_event_bdev.a 00:02:38.227 SO libspdk_event_bdev.so.6.0 00:02:38.227 SYMLINK libspdk_event_bdev.so 00:02:38.800 CC module/event/subsystems/scsi/scsi.o 00:02:38.800 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:38.800 CC module/event/subsystems/ublk/ublk.o 00:02:38.800 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:38.800 CC module/event/subsystems/nbd/nbd.o 00:02:38.800 LIB libspdk_event_ublk.a 00:02:38.800 LIB libspdk_event_nbd.a 00:02:38.800 LIB libspdk_event_scsi.a 00:02:38.800 SO libspdk_event_nbd.so.6.0 00:02:38.800 SO libspdk_event_ublk.so.3.0 00:02:39.061 SO libspdk_event_scsi.so.6.0 00:02:39.061 LIB libspdk_event_nvmf.a 00:02:39.061 SYMLINK libspdk_event_nbd.so 00:02:39.061 SYMLINK libspdk_event_ublk.so 00:02:39.061 SYMLINK libspdk_event_scsi.so 00:02:39.061 SO libspdk_event_nvmf.so.6.0 00:02:39.061 SYMLINK libspdk_event_nvmf.so 00:02:39.323 CC module/event/subsystems/iscsi/iscsi.o 00:02:39.323 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:39.585 LIB libspdk_event_vhost_scsi.a 00:02:39.585 LIB libspdk_event_iscsi.a 00:02:39.585 SO libspdk_event_vhost_scsi.so.3.0 00:02:39.585 SO libspdk_event_iscsi.so.6.0 00:02:39.585 SYMLINK libspdk_event_vhost_scsi.so 00:02:39.585 SYMLINK libspdk_event_iscsi.so 00:02:39.847 SO libspdk.so.6.0 00:02:39.847 SYMLINK libspdk.so 00:02:40.423 CXX app/trace/trace.o 00:02:40.423 CC app/trace_record/trace_record.o 00:02:40.423 CC app/spdk_nvme_perf/perf.o 00:02:40.423 CC app/spdk_top/spdk_top.o 00:02:40.423 CC app/spdk_lspci/spdk_lspci.o 00:02:40.423 TEST_HEADER include/spdk/accel.h 00:02:40.423 TEST_HEADER include/spdk/accel_module.h 00:02:40.424 TEST_HEADER include/spdk/assert.h 00:02:40.424 CC test/rpc_client/rpc_client_test.o 00:02:40.424 CC app/spdk_nvme_identify/identify.o 00:02:40.424 TEST_HEADER include/spdk/barrier.h 00:02:40.424 TEST_HEADER include/spdk/base64.h 00:02:40.424 CC app/spdk_nvme_discover/discovery_aer.o 00:02:40.424 TEST_HEADER include/spdk/bdev.h 00:02:40.424 TEST_HEADER include/spdk/bdev_module.h 00:02:40.424 TEST_HEADER include/spdk/bdev_zone.h 00:02:40.424 TEST_HEADER include/spdk/bit_pool.h 00:02:40.424 TEST_HEADER include/spdk/bit_array.h 00:02:40.424 TEST_HEADER include/spdk/blob_bdev.h 00:02:40.424 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:40.424 TEST_HEADER include/spdk/blob.h 00:02:40.424 TEST_HEADER include/spdk/blobfs.h 00:02:40.424 TEST_HEADER include/spdk/conf.h 00:02:40.424 TEST_HEADER include/spdk/config.h 00:02:40.424 TEST_HEADER include/spdk/cpuset.h 00:02:40.424 TEST_HEADER include/spdk/crc16.h 00:02:40.424 TEST_HEADER include/spdk/crc32.h 00:02:40.424 TEST_HEADER include/spdk/crc64.h 00:02:40.424 TEST_HEADER include/spdk/dif.h 00:02:40.424 TEST_HEADER include/spdk/dma.h 00:02:40.424 TEST_HEADER include/spdk/endian.h 00:02:40.424 TEST_HEADER include/spdk/env.h 00:02:40.424 TEST_HEADER include/spdk/env_dpdk.h 00:02:40.424 TEST_HEADER include/spdk/event.h 00:02:40.424 TEST_HEADER include/spdk/fd_group.h 00:02:40.424 TEST_HEADER include/spdk/fd.h 00:02:40.424 TEST_HEADER include/spdk/file.h 
00:02:40.424 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:40.424 TEST_HEADER include/spdk/fsdev.h 00:02:40.424 TEST_HEADER include/spdk/fsdev_module.h 00:02:40.424 TEST_HEADER include/spdk/ftl.h 00:02:40.424 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:40.424 TEST_HEADER include/spdk/gpt_spec.h 00:02:40.424 CC app/iscsi_tgt/iscsi_tgt.o 00:02:40.424 TEST_HEADER include/spdk/hexlify.h 00:02:40.424 TEST_HEADER include/spdk/histogram_data.h 00:02:40.424 TEST_HEADER include/spdk/idxd.h 00:02:40.424 TEST_HEADER include/spdk/idxd_spec.h 00:02:40.424 TEST_HEADER include/spdk/init.h 00:02:40.424 CC app/spdk_dd/spdk_dd.o 00:02:40.424 TEST_HEADER include/spdk/ioat.h 00:02:40.424 TEST_HEADER include/spdk/ioat_spec.h 00:02:40.424 TEST_HEADER include/spdk/json.h 00:02:40.424 TEST_HEADER include/spdk/iscsi_spec.h 00:02:40.424 TEST_HEADER include/spdk/jsonrpc.h 00:02:40.424 CC app/nvmf_tgt/nvmf_main.o 00:02:40.424 TEST_HEADER include/spdk/keyring.h 00:02:40.424 TEST_HEADER include/spdk/keyring_module.h 00:02:40.424 TEST_HEADER include/spdk/likely.h 00:02:40.424 TEST_HEADER include/spdk/log.h 00:02:40.424 TEST_HEADER include/spdk/md5.h 00:02:40.424 TEST_HEADER include/spdk/lvol.h 00:02:40.424 TEST_HEADER include/spdk/memory.h 00:02:40.424 TEST_HEADER include/spdk/mmio.h 00:02:40.424 TEST_HEADER include/spdk/nbd.h 00:02:40.424 TEST_HEADER include/spdk/net.h 00:02:40.424 TEST_HEADER include/spdk/notify.h 00:02:40.424 TEST_HEADER include/spdk/nvme.h 00:02:40.424 CC app/spdk_tgt/spdk_tgt.o 00:02:40.424 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:40.424 TEST_HEADER include/spdk/nvme_intel.h 00:02:40.424 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:40.424 TEST_HEADER include/spdk/nvme_spec.h 00:02:40.424 TEST_HEADER include/spdk/nvme_zns.h 00:02:40.424 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:40.424 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:40.424 TEST_HEADER include/spdk/nvmf.h 00:02:40.424 TEST_HEADER include/spdk/nvmf_spec.h 00:02:40.424 TEST_HEADER include/spdk/nvmf_transport.h 00:02:40.424 TEST_HEADER include/spdk/opal.h 00:02:40.424 TEST_HEADER include/spdk/opal_spec.h 00:02:40.424 TEST_HEADER include/spdk/pci_ids.h 00:02:40.424 TEST_HEADER include/spdk/pipe.h 00:02:40.424 TEST_HEADER include/spdk/queue.h 00:02:40.424 TEST_HEADER include/spdk/rpc.h 00:02:40.424 TEST_HEADER include/spdk/reduce.h 00:02:40.424 TEST_HEADER include/spdk/scheduler.h 00:02:40.424 TEST_HEADER include/spdk/scsi.h 00:02:40.424 TEST_HEADER include/spdk/scsi_spec.h 00:02:40.424 TEST_HEADER include/spdk/sock.h 00:02:40.424 TEST_HEADER include/spdk/stdinc.h 00:02:40.424 TEST_HEADER include/spdk/string.h 00:02:40.424 TEST_HEADER include/spdk/thread.h 00:02:40.424 TEST_HEADER include/spdk/trace.h 00:02:40.424 TEST_HEADER include/spdk/trace_parser.h 00:02:40.424 TEST_HEADER include/spdk/tree.h 00:02:40.424 TEST_HEADER include/spdk/ublk.h 00:02:40.424 TEST_HEADER include/spdk/util.h 00:02:40.424 TEST_HEADER include/spdk/uuid.h 00:02:40.424 TEST_HEADER include/spdk/version.h 00:02:40.424 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:40.424 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:40.424 TEST_HEADER include/spdk/vhost.h 00:02:40.424 TEST_HEADER include/spdk/vmd.h 00:02:40.424 TEST_HEADER include/spdk/zipf.h 00:02:40.424 TEST_HEADER include/spdk/xor.h 00:02:40.424 CXX test/cpp_headers/accel.o 00:02:40.424 CXX test/cpp_headers/accel_module.o 00:02:40.424 CXX test/cpp_headers/assert.o 00:02:40.424 CXX test/cpp_headers/barrier.o 00:02:40.424 CXX test/cpp_headers/base64.o 00:02:40.424 CXX test/cpp_headers/bdev.o 
00:02:40.424 CXX test/cpp_headers/bdev_module.o 00:02:40.424 CXX test/cpp_headers/bit_array.o 00:02:40.424 CXX test/cpp_headers/bdev_zone.o 00:02:40.424 CXX test/cpp_headers/bit_pool.o 00:02:40.424 CXX test/cpp_headers/blob_bdev.o 00:02:40.424 CXX test/cpp_headers/blobfs_bdev.o 00:02:40.424 CXX test/cpp_headers/blob.o 00:02:40.424 CXX test/cpp_headers/blobfs.o 00:02:40.424 CXX test/cpp_headers/conf.o 00:02:40.424 CXX test/cpp_headers/config.o 00:02:40.424 CXX test/cpp_headers/crc16.o 00:02:40.424 CXX test/cpp_headers/cpuset.o 00:02:40.424 CXX test/cpp_headers/crc32.o 00:02:40.424 CXX test/cpp_headers/dif.o 00:02:40.424 CXX test/cpp_headers/crc64.o 00:02:40.424 CXX test/cpp_headers/dma.o 00:02:40.424 CXX test/cpp_headers/endian.o 00:02:40.424 CXX test/cpp_headers/env_dpdk.o 00:02:40.424 CXX test/cpp_headers/env.o 00:02:40.424 CXX test/cpp_headers/event.o 00:02:40.424 CXX test/cpp_headers/fd_group.o 00:02:40.424 CXX test/cpp_headers/fd.o 00:02:40.424 CXX test/cpp_headers/file.o 00:02:40.424 CXX test/cpp_headers/fsdev_module.o 00:02:40.424 CXX test/cpp_headers/fsdev.o 00:02:40.424 CXX test/cpp_headers/ftl.o 00:02:40.424 CXX test/cpp_headers/hexlify.o 00:02:40.424 CXX test/cpp_headers/fuse_dispatcher.o 00:02:40.424 CXX test/cpp_headers/gpt_spec.o 00:02:40.424 CXX test/cpp_headers/histogram_data.o 00:02:40.424 CXX test/cpp_headers/idxd.o 00:02:40.424 CXX test/cpp_headers/init.o 00:02:40.424 CXX test/cpp_headers/idxd_spec.o 00:02:40.424 CXX test/cpp_headers/ioat.o 00:02:40.424 CXX test/cpp_headers/ioat_spec.o 00:02:40.424 CXX test/cpp_headers/json.o 00:02:40.424 CXX test/cpp_headers/iscsi_spec.o 00:02:40.424 CXX test/cpp_headers/keyring.o 00:02:40.424 CXX test/cpp_headers/jsonrpc.o 00:02:40.424 CXX test/cpp_headers/keyring_module.o 00:02:40.424 CXX test/cpp_headers/lvol.o 00:02:40.424 CXX test/cpp_headers/likely.o 00:02:40.424 CXX test/cpp_headers/log.o 00:02:40.424 CC examples/ioat/perf/perf.o 00:02:40.424 CXX test/cpp_headers/md5.o 00:02:40.424 CXX test/cpp_headers/net.o 00:02:40.424 CXX test/cpp_headers/memory.o 00:02:40.424 CXX test/cpp_headers/mmio.o 00:02:40.424 CXX test/cpp_headers/nbd.o 00:02:40.424 CXX test/cpp_headers/nvme.o 00:02:40.424 CXX test/cpp_headers/notify.o 00:02:40.424 CC test/thread/poller_perf/poller_perf.o 00:02:40.424 CXX test/cpp_headers/nvme_intel.o 00:02:40.424 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:40.424 CXX test/cpp_headers/nvmf_cmd.o 00:02:40.424 CXX test/cpp_headers/nvme_ocssd.o 00:02:40.424 CXX test/cpp_headers/nvme_spec.o 00:02:40.424 CXX test/cpp_headers/nvmf.o 00:02:40.424 CC examples/util/zipf/zipf.o 00:02:40.424 CXX test/cpp_headers/nvmf_spec.o 00:02:40.424 CXX test/cpp_headers/nvme_zns.o 00:02:40.424 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:40.424 CC examples/ioat/verify/verify.o 00:02:40.424 CXX test/cpp_headers/opal.o 00:02:40.424 CXX test/cpp_headers/nvmf_transport.o 00:02:40.424 CC test/app/jsoncat/jsoncat.o 00:02:40.424 CXX test/cpp_headers/opal_spec.o 00:02:40.424 CC test/env/vtophys/vtophys.o 00:02:40.424 CXX test/cpp_headers/queue.o 00:02:40.424 CXX test/cpp_headers/pci_ids.o 00:02:40.424 CXX test/cpp_headers/pipe.o 00:02:40.424 CXX test/cpp_headers/reduce.o 00:02:40.690 CXX test/cpp_headers/scsi.o 00:02:40.690 CXX test/cpp_headers/scheduler.o 00:02:40.690 CXX test/cpp_headers/rpc.o 00:02:40.690 LINK spdk_lspci 00:02:40.690 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:40.690 CXX test/cpp_headers/scsi_spec.o 00:02:40.690 CXX test/cpp_headers/string.o 00:02:40.690 CXX test/cpp_headers/sock.o 00:02:40.690 CC test/app/stub/stub.o 
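The long run of CXX test/cpp_headers/*.o objects here is a header self-containment check: one tiny C++ translation unit is compiled per public spdk/*.h header, so a header that fails to pull in its own dependencies (or is not C++-safe) breaks the build immediately. A minimal sketch of the idea, assuming hypothetical paths, file names, and flags rather than SPDK's actual build rules:

    # Hypothetical sketch of the per-header compile check; /tmp/check.cpp,
    # the include path, and the compiler flags are assumptions.
    for h in include/spdk/*.h; do
        printf '#include <spdk/%s>\n' "$(basename "$h")" > /tmp/check.cpp
        g++ -Iinclude -c /tmp/check.cpp -o /dev/null || echo "not self-contained: $h"
    done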
00:02:40.690 CC test/env/pci/pci_ut.o 00:02:40.690 CXX test/cpp_headers/stdinc.o 00:02:40.690 CXX test/cpp_headers/thread.o 00:02:40.690 CXX test/cpp_headers/trace_parser.o 00:02:40.690 CXX test/cpp_headers/trace.o 00:02:40.690 CXX test/cpp_headers/ublk.o 00:02:40.690 CC app/fio/nvme/fio_plugin.o 00:02:40.690 CXX test/cpp_headers/tree.o 00:02:40.690 CXX test/cpp_headers/util.o 00:02:40.690 CXX test/cpp_headers/uuid.o 00:02:40.690 CXX test/cpp_headers/vfio_user_pci.o 00:02:40.690 CXX test/cpp_headers/version.o 00:02:40.690 CXX test/cpp_headers/vfio_user_spec.o 00:02:40.690 CC test/dma/test_dma/test_dma.o 00:02:40.690 CXX test/cpp_headers/vmd.o 00:02:40.690 CXX test/cpp_headers/vhost.o 00:02:40.690 CXX test/cpp_headers/zipf.o 00:02:40.690 CXX test/cpp_headers/xor.o 00:02:40.690 CC test/env/memory/memory_ut.o 00:02:40.690 CC test/app/histogram_perf/histogram_perf.o 00:02:40.690 CC test/app/bdev_svc/bdev_svc.o 00:02:40.690 CC app/fio/bdev/fio_plugin.o 00:02:40.690 LINK rpc_client_test 00:02:40.690 LINK interrupt_tgt 00:02:40.959 LINK spdk_nvme_discover 00:02:40.959 LINK spdk_trace_record 00:02:40.959 LINK nvmf_tgt 00:02:40.959 LINK iscsi_tgt 00:02:41.223 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:41.223 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:41.223 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:41.223 LINK spdk_tgt 00:02:41.223 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:41.223 CC test/env/mem_callbacks/mem_callbacks.o 00:02:41.223 LINK jsoncat 00:02:41.223 LINK spdk_dd 00:02:41.223 LINK stub 00:02:41.483 LINK poller_perf 00:02:41.483 LINK bdev_svc 00:02:41.483 LINK spdk_trace 00:02:41.483 LINK vtophys 00:02:41.483 LINK zipf 00:02:41.743 LINK env_dpdk_post_init 00:02:41.743 LINK histogram_perf 00:02:41.743 LINK verify 00:02:41.743 LINK ioat_perf 00:02:42.004 LINK spdk_nvme_identify 00:02:42.004 LINK pci_ut 00:02:42.004 LINK nvme_fuzz 00:02:42.004 LINK vhost_fuzz 00:02:42.004 LINK test_dma 00:02:42.004 CC app/vhost/vhost.o 00:02:42.004 LINK spdk_nvme 00:02:42.004 LINK spdk_bdev 00:02:42.264 CC examples/idxd/perf/perf.o 00:02:42.264 CC test/event/reactor/reactor.o 00:02:42.264 CC test/event/reactor_perf/reactor_perf.o 00:02:42.264 CC examples/sock/hello_world/hello_sock.o 00:02:42.264 LINK spdk_nvme_perf 00:02:42.264 CC test/event/event_perf/event_perf.o 00:02:42.264 CC examples/vmd/lsvmd/lsvmd.o 00:02:42.264 CC test/event/app_repeat/app_repeat.o 00:02:42.264 CC examples/vmd/led/led.o 00:02:42.264 LINK mem_callbacks 00:02:42.264 LINK spdk_top 00:02:42.264 CC examples/thread/thread/thread_ex.o 00:02:42.264 CC test/event/scheduler/scheduler.o 00:02:42.264 LINK vhost 00:02:42.264 LINK reactor 00:02:42.264 LINK lsvmd 00:02:42.264 LINK reactor_perf 00:02:42.265 LINK event_perf 00:02:42.265 LINK led 00:02:42.525 LINK app_repeat 00:02:42.525 LINK hello_sock 00:02:42.525 LINK thread 00:02:42.525 LINK idxd_perf 00:02:42.525 LINK scheduler 00:02:42.525 CC test/nvme/aer/aer.o 00:02:42.525 CC test/nvme/err_injection/err_injection.o 00:02:42.525 CC test/nvme/overhead/overhead.o 00:02:42.525 CC test/nvme/sgl/sgl.o 00:02:42.786 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:42.786 CC test/nvme/e2edp/nvme_dp.o 00:02:42.786 CC test/nvme/fused_ordering/fused_ordering.o 00:02:42.786 CC test/nvme/reset/reset.o 00:02:42.786 CC test/nvme/connect_stress/connect_stress.o 00:02:42.786 CC test/nvme/boot_partition/boot_partition.o 00:02:42.786 CC test/nvme/reserve/reserve.o 00:02:42.786 CC test/nvme/startup/startup.o 00:02:42.786 CC test/nvme/compliance/nvme_compliance.o 00:02:42.786 CC 
test/nvme/simple_copy/simple_copy.o 00:02:42.786 CC test/blobfs/mkfs/mkfs.o 00:02:42.786 CC test/nvme/fdp/fdp.o 00:02:42.786 LINK memory_ut 00:02:42.786 CC test/nvme/cuse/cuse.o 00:02:42.786 CC test/accel/dif/dif.o 00:02:42.786 CC test/lvol/esnap/esnap.o 00:02:42.786 LINK boot_partition 00:02:42.786 LINK startup 00:02:42.786 LINK connect_stress 00:02:42.786 LINK fused_ordering 00:02:42.786 LINK err_injection 00:02:43.047 LINK reserve 00:02:43.047 LINK doorbell_aers 00:02:43.047 LINK mkfs 00:02:43.047 LINK simple_copy 00:02:43.047 LINK aer 00:02:43.047 LINK reset 00:02:43.047 CC examples/nvme/reconnect/reconnect.o 00:02:43.047 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:43.047 CC examples/nvme/hello_world/hello_world.o 00:02:43.047 LINK sgl 00:02:43.047 LINK overhead 00:02:43.047 LINK nvme_dp 00:02:43.047 CC examples/nvme/abort/abort.o 00:02:43.047 CC examples/nvme/hotplug/hotplug.o 00:02:43.047 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:43.047 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:43.047 CC examples/nvme/arbitration/arbitration.o 00:02:43.047 LINK nvme_compliance 00:02:43.047 LINK fdp 00:02:43.047 CC examples/accel/perf/accel_perf.o 00:02:43.047 LINK iscsi_fuzz 00:02:43.047 CC examples/blob/cli/blobcli.o 00:02:43.047 CC examples/blob/hello_world/hello_blob.o 00:02:43.047 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:43.308 LINK pmr_persistence 00:02:43.308 LINK hello_world 00:02:43.308 LINK cmb_copy 00:02:43.308 LINK hotplug 00:02:43.308 LINK reconnect 00:02:43.308 LINK arbitration 00:02:43.308 LINK abort 00:02:43.308 LINK dif 00:02:43.308 LINK hello_blob 00:02:43.569 LINK hello_fsdev 00:02:43.569 LINK nvme_manage 00:02:43.569 LINK accel_perf 00:02:43.569 LINK blobcli 00:02:43.831 LINK cuse 00:02:44.092 CC test/bdev/bdevio/bdevio.o 00:02:44.092 CC examples/bdev/hello_world/hello_bdev.o 00:02:44.092 CC examples/bdev/bdevperf/bdevperf.o 00:02:44.353 LINK bdevio 00:02:44.353 LINK hello_bdev 00:02:44.924 LINK bdevperf 00:02:45.495 CC examples/nvmf/nvmf/nvmf.o 00:02:45.756 LINK nvmf 00:02:46.329 LINK esnap 00:02:46.903 00:02:46.903 real 0m55.478s 00:02:46.903 user 8m8.140s 00:02:46.903 sys 5m30.381s 00:02:46.903 06:45:46 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:46.903 06:45:46 make -- common/autotest_common.sh@10 -- $ set +x 00:02:46.903 ************************************ 00:02:46.903 END TEST make 00:02:46.903 ************************************ 00:02:46.903 06:45:46 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:46.903 06:45:46 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:46.903 06:45:46 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:46.903 06:45:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:46.903 06:45:46 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:46.903 06:45:46 -- pm/common@44 -- $ pid=2799359 00:02:46.903 06:45:46 -- pm/common@50 -- $ kill -TERM 2799359 00:02:46.903 06:45:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:46.903 06:45:46 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:46.903 06:45:46 -- pm/common@44 -- $ pid=2799360 00:02:46.903 06:45:46 -- pm/common@50 -- $ kill -TERM 2799360 00:02:46.903 06:45:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:46.903 06:45:46 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:46.903 06:45:46 -- pm/common@44 -- $ pid=2799362 00:02:46.903 06:45:46 -- pm/common@50 -- $ kill -TERM 2799362 00:02:46.903 06:45:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:46.903 06:45:46 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:46.903 06:45:46 -- pm/common@44 -- $ pid=2799385 00:02:46.903 06:45:46 -- pm/common@50 -- $ sudo -E kill -TERM 2799385 00:02:46.903 06:45:46 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:02:46.903 06:45:46 -- common/autotest_common.sh@1691 -- # lcov --version 00:02:46.903 06:45:46 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:02:46.903 06:45:46 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:02:46.903 06:45:46 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:46.903 06:45:46 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:46.903 06:45:46 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:46.903 06:45:46 -- scripts/common.sh@336 -- # IFS=.-: 00:02:46.903 06:45:46 -- scripts/common.sh@336 -- # read -ra ver1 00:02:46.903 06:45:46 -- scripts/common.sh@337 -- # IFS=.-: 00:02:46.903 06:45:46 -- scripts/common.sh@337 -- # read -ra ver2 00:02:46.903 06:45:46 -- scripts/common.sh@338 -- # local 'op=<' 00:02:46.903 06:45:46 -- scripts/common.sh@340 -- # ver1_l=2 00:02:46.903 06:45:46 -- scripts/common.sh@341 -- # ver2_l=1 00:02:46.903 06:45:46 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:46.903 06:45:46 -- scripts/common.sh@344 -- # case "$op" in 00:02:46.903 06:45:46 -- scripts/common.sh@345 -- # : 1 00:02:46.903 06:45:46 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:46.903 06:45:46 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:46.903 06:45:46 -- scripts/common.sh@365 -- # decimal 1 00:02:46.904 06:45:46 -- scripts/common.sh@353 -- # local d=1 00:02:46.904 06:45:46 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:46.904 06:45:46 -- scripts/common.sh@355 -- # echo 1 00:02:46.904 06:45:46 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:46.904 06:45:46 -- scripts/common.sh@366 -- # decimal 2 00:02:46.904 06:45:46 -- scripts/common.sh@353 -- # local d=2 00:02:46.904 06:45:46 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:46.904 06:45:46 -- scripts/common.sh@355 -- # echo 2 00:02:46.904 06:45:46 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:46.904 06:45:46 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:46.904 06:45:46 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:46.904 06:45:46 -- scripts/common.sh@368 -- # return 0 00:02:46.904 06:45:46 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:46.904 06:45:46 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:02:46.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:46.904 --rc genhtml_branch_coverage=1 00:02:46.904 --rc genhtml_function_coverage=1 00:02:46.904 --rc genhtml_legend=1 00:02:46.904 --rc geninfo_all_blocks=1 00:02:46.904 --rc geninfo_unexecuted_blocks=1 00:02:46.904 00:02:46.904 ' 00:02:46.904 06:45:46 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:02:46.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:46.904 --rc genhtml_branch_coverage=1 00:02:46.904 --rc genhtml_function_coverage=1 00:02:46.904 --rc genhtml_legend=1 00:02:46.904 --rc geninfo_all_blocks=1 00:02:46.904 --rc geninfo_unexecuted_blocks=1 00:02:46.904 00:02:46.904 ' 00:02:46.904 06:45:46 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:02:46.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:46.904 --rc genhtml_branch_coverage=1 00:02:46.904 --rc genhtml_function_coverage=1 00:02:46.904 --rc genhtml_legend=1 00:02:46.904 --rc geninfo_all_blocks=1 00:02:46.904 --rc geninfo_unexecuted_blocks=1 00:02:46.904 00:02:46.904 ' 00:02:46.904 06:45:46 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:02:46.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:46.904 --rc genhtml_branch_coverage=1 00:02:46.904 --rc genhtml_function_coverage=1 00:02:46.904 --rc genhtml_legend=1 00:02:46.904 --rc geninfo_all_blocks=1 00:02:46.904 --rc geninfo_unexecuted_blocks=1 00:02:46.904 00:02:46.904 ' 00:02:46.904 06:45:46 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:46.904 06:45:46 -- nvmf/common.sh@7 -- # uname -s 00:02:47.166 06:45:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:47.166 06:45:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:47.166 06:45:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:47.166 06:45:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:47.166 06:45:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:47.166 06:45:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:47.166 06:45:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:47.166 06:45:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:47.166 06:45:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:47.166 06:45:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:47.166 06:45:46 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:47.166 06:45:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:47.166 06:45:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:47.166 06:45:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:47.166 06:45:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:47.166 06:45:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:47.166 06:45:46 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:47.166 06:45:46 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:47.166 06:45:46 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:47.166 06:45:46 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:47.166 06:45:46 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:47.166 06:45:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:47.166 06:45:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:47.166 06:45:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:47.166 06:45:46 -- paths/export.sh@5 -- # export PATH 00:02:47.166 06:45:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:47.166 06:45:46 -- nvmf/common.sh@51 -- # : 0 00:02:47.166 06:45:46 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:47.166 06:45:46 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:47.166 06:45:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:47.166 06:45:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:47.166 06:45:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:47.166 06:45:46 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:47.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:47.166 06:45:46 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:47.166 06:45:46 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:47.166 06:45:46 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:47.166 06:45:46 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:47.166 06:45:46 -- spdk/autotest.sh@32 -- # uname -s 00:02:47.166 06:45:46 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:47.167 06:45:46 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:47.167 06:45:46 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
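[Editor's note] The "[: : integer expression expected" message captured above comes from the '[' '' -eq 1 ']' test at nvmf/common.sh line 33: -eq requires integer operands, so a variable that expands to the empty string trips the error (the test still evaluates false, which is why the run carries on). A minimal reproduction and a defensive rewrite follow; the variable name NVMF_FLAG is hypothetical, since the log does not show which variable was empty.

#!/usr/bin/env bash
# Reproduces the logged failure: '[' (the test builtin) needs integers on
# both sides of -eq, so an empty expansion prints
# "[: : integer expression expected" on stderr and the test returns false.
NVMF_FLAG=""   # hypothetical name; only the empty expansion is visible in the log

if [ "$NVMF_FLAG" -eq 1 ]; then   # emits the integer-expression error
    echo "flag set"
fi

# Defensive form: default the expansion to 0 so -eq always sees an integer.
if [ "${NVMF_FLAG:-0}" -eq 1 ]; then
    echo "flag set"
fi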
00:02:47.167 06:45:46 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:47.167 06:45:46 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:47.167 06:45:46 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:47.167 06:45:46 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:47.167 06:45:46 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:47.167 06:45:46 -- spdk/autotest.sh@48 -- # udevadm_pid=2865438 00:02:47.167 06:45:46 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:47.167 06:45:46 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:47.167 06:45:46 -- pm/common@17 -- # local monitor 00:02:47.167 06:45:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:47.167 06:45:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:47.167 06:45:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:47.167 06:45:46 -- pm/common@21 -- # date +%s 00:02:47.167 06:45:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:47.167 06:45:46 -- pm/common@21 -- # date +%s 00:02:47.167 06:45:46 -- pm/common@25 -- # sleep 1 00:02:47.167 06:45:46 -- pm/common@21 -- # date +%s 00:02:47.167 06:45:46 -- pm/common@21 -- # date +%s 00:02:47.167 06:45:46 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1729053946 00:02:47.167 06:45:46 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1729053946 00:02:47.167 06:45:46 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1729053946 00:02:47.167 06:45:46 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1729053946 00:02:47.167 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1729053946_collect-cpu-load.pm.log 00:02:47.167 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1729053946_collect-vmstat.pm.log 00:02:47.167 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1729053946_collect-cpu-temp.pm.log 00:02:47.167 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1729053946_collect-bmc-pm.bmc.pm.log 00:02:48.110 06:45:47 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:48.110 06:45:47 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:48.110 06:45:47 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:48.110 06:45:47 -- common/autotest_common.sh@10 -- # set +x 00:02:48.110 06:45:47 -- spdk/autotest.sh@59 -- # create_test_list 00:02:48.110 06:45:47 -- common/autotest_common.sh@748 -- # xtrace_disable 00:02:48.110 06:45:47 -- common/autotest_common.sh@10 -- # set +x 00:02:48.110 06:45:47 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:48.110 06:45:47 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:48.110 06:45:47 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:48.110 06:45:47 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:48.110 06:45:47 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:48.110 06:45:47 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:48.110 06:45:47 -- common/autotest_common.sh@1455 -- # uname 00:02:48.110 06:45:47 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:48.110 06:45:47 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:48.110 06:45:47 -- common/autotest_common.sh@1475 -- # uname 00:02:48.110 06:45:47 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:48.110 06:45:47 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:48.110 06:45:47 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:48.371 lcov: LCOV version 1.15 00:02:48.371 06:45:47 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:10.345 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:10.345 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:18.491 06:46:17 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:18.491 06:46:17 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:18.491 06:46:17 -- common/autotest_common.sh@10 -- # set +x 00:03:18.491 06:46:17 -- spdk/autotest.sh@78 -- # rm -f 00:03:18.491 06:46:17 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:21.888 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:21.888 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:21.888 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:21.888 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:21.888 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:21.888 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:22.173 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:22.173 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:22.173 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:22.173 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:22.173 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:22.173 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:22.173 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:22.173 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:22.173 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:22.173 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:22.173 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:22.434 06:46:21 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:03:22.434 06:46:21 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:22.434 06:46:21 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:22.434 06:46:21 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:22.434 06:46:21 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:22.434 06:46:21 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:22.434 06:46:21 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:22.434 06:46:21 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:22.434 06:46:21 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:22.434 06:46:21 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:22.434 06:46:21 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:22.434 06:46:21 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:22.434 06:46:21 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:22.434 06:46:21 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:22.434 06:46:21 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:22.696 No valid GPT data, bailing 00:03:22.696 06:46:21 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:22.696 06:46:21 -- scripts/common.sh@394 -- # pt= 00:03:22.696 06:46:21 -- scripts/common.sh@395 -- # return 1 00:03:22.696 06:46:21 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:22.696 1+0 records in 00:03:22.696 1+0 records out 00:03:22.696 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00629527 s, 167 MB/s 00:03:22.696 06:46:21 -- spdk/autotest.sh@105 -- # sync 00:03:22.696 06:46:21 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:22.696 06:46:21 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:22.696 06:46:21 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:32.705 06:46:30 -- spdk/autotest.sh@111 -- # uname -s 00:03:32.705 06:46:30 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:32.705 06:46:30 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:32.705 06:46:30 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:34.622 Hugepages 00:03:34.622 node hugesize free / total 00:03:34.622 node0 1048576kB 0 / 0 00:03:34.622 node0 2048kB 0 / 0 00:03:34.622 node1 1048576kB 0 / 0 00:03:34.622 node1 2048kB 0 / 0 00:03:34.622 00:03:34.622 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:34.622 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:03:34.622 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:03:34.622 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:03:34.622 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:03:34.622 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:03:34.622 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:03:34.622 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:03:34.622 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:03:34.884 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:03:34.884 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:03:34.884 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:03:34.884 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:03:34.884 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:03:34.884 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:03:34.884 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:03:34.884 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:03:34.884 I/OAT 0000:80:01.7 8086 
0b00 1 ioatdma - - 00:03:34.884 06:46:34 -- spdk/autotest.sh@117 -- # uname -s 00:03:34.884 06:46:34 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:34.884 06:46:34 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:34.884 06:46:34 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:38.191 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:38.191 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:38.191 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:38.191 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:38.191 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:38.453 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:38.453 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:38.453 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:38.453 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:38.453 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:38.453 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:38.453 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:38.453 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:38.453 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:38.453 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:38.453 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:40.368 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:40.630 06:46:39 -- common/autotest_common.sh@1515 -- # sleep 1 00:03:41.574 06:46:40 -- common/autotest_common.sh@1516 -- # bdfs=() 00:03:41.574 06:46:40 -- common/autotest_common.sh@1516 -- # local bdfs 00:03:41.574 06:46:40 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:03:41.574 06:46:40 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:03:41.574 06:46:40 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:41.574 06:46:40 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:41.574 06:46:40 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:41.574 06:46:40 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:41.574 06:46:40 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:41.574 06:46:41 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:03:41.574 06:46:41 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:03:41.574 06:46:41 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:44.877 Waiting for block devices as requested 00:03:45.138 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:45.138 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:45.138 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:45.400 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:45.400 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:45.400 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:45.661 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:45.661 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:03:45.661 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:03:45.922 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:45.922 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:45.922 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:46.184 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:46.184 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:46.184 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:46.446 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:46.446 0000:00:01.1 (8086 0b00): 
vfio-pci -> ioatdma 00:03:46.708 06:46:46 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:03:46.708 06:46:46 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:03:46.708 06:46:46 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:03:46.708 06:46:46 -- common/autotest_common.sh@1485 -- # grep 0000:65:00.0/nvme/nvme 00:03:46.708 06:46:46 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:46.708 06:46:46 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:03:46.708 06:46:46 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:46.708 06:46:46 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:03:46.708 06:46:46 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:03:46.708 06:46:46 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:03:46.708 06:46:46 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:03:46.708 06:46:46 -- common/autotest_common.sh@1529 -- # grep oacs 00:03:46.708 06:46:46 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:03:46.708 06:46:46 -- common/autotest_common.sh@1529 -- # oacs=' 0x5f' 00:03:46.708 06:46:46 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:03:46.708 06:46:46 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:03:46.708 06:46:46 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:03:46.708 06:46:46 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:03:46.708 06:46:46 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:03:46.708 06:46:46 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:03:46.708 06:46:46 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:03:46.708 06:46:46 -- common/autotest_common.sh@1541 -- # continue 00:03:46.708 06:46:46 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:46.708 06:46:46 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:46.708 06:46:46 -- common/autotest_common.sh@10 -- # set +x 00:03:46.970 06:46:46 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:46.970 06:46:46 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:46.970 06:46:46 -- common/autotest_common.sh@10 -- # set +x 00:03:46.970 06:46:46 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:50.272 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:50.272 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:50.272 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:50.272 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:50.272 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:50.272 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:50.273 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:50.273 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:50.273 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:50.533 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:50.533 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:50.533 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:50.533 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:50.533 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:50.533 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:50.533 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:50.533 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:50.794 06:46:50 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:03:50.794 06:46:50 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:50.794 06:46:50 -- common/autotest_common.sh@10 -- # set +x 00:03:51.055 06:46:50 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:51.055 06:46:50 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:03:51.055 06:46:50 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:03:51.055 06:46:50 -- common/autotest_common.sh@1561 -- # bdfs=() 00:03:51.055 06:46:50 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:03:51.055 06:46:50 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:03:51.055 06:46:50 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:03:51.055 06:46:50 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:03:51.055 06:46:50 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:51.055 06:46:50 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:51.055 06:46:50 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:51.055 06:46:50 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:51.055 06:46:50 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:51.055 06:46:50 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:03:51.055 06:46:50 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:03:51.055 06:46:50 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:03:51.055 06:46:50 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:03:51.055 06:46:50 -- common/autotest_common.sh@1564 -- # device=0xa80a 00:03:51.055 06:46:50 -- common/autotest_common.sh@1565 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:03:51.055 06:46:50 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:03:51.055 06:46:50 -- common/autotest_common.sh@1570 -- # return 0 00:03:51.055 06:46:50 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:03:51.055 06:46:50 -- common/autotest_common.sh@1578 -- # return 0 00:03:51.055 06:46:50 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:51.055 06:46:50 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:51.055 06:46:50 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:51.055 06:46:50 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:51.055 06:46:50 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:51.055 06:46:50 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:51.055 06:46:50 -- common/autotest_common.sh@10 -- # set +x 00:03:51.055 06:46:50 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:51.055 06:46:50 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:51.055 06:46:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:51.055 06:46:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:51.055 06:46:50 -- common/autotest_common.sh@10 -- # set +x 00:03:51.055 ************************************ 00:03:51.055 START TEST env 00:03:51.055 ************************************ 00:03:51.055 06:46:50 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:51.316 * Looking for test storage... 
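[Editor's note] The opal_revert_cleanup trace above filters controllers by PCI device ID: get_nvme_bdfs enumerates NVMe addresses via gen_nvme.sh piped through jq, then each controller's /sys/bus/pci/devices/<bdf>/device is compared against 0x0a54 (an Intel datacenter NVMe ID; OPAL revert is only attempted on those parts). The Samsung controller here reports 0xa80a, so nothing matches and the cleanup returns early. A condensed sketch of that filter, assuming this job's workspace path:

#!/usr/bin/env bash
# Condensed from the trace above: list NVMe BDFs, keep only those whose
# PCI device ID matches the target the cleanup cares about.
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
target=0x0a54

mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')

for bdf in "${bdfs[@]}"; do
    device=$(cat "/sys/bus/pci/devices/$bdf/device")
    # In this run device=0xa80a (144d:a80a, the Samsung drive), so the
    # match fails and opal_revert_cleanup has nothing to revert.
    [[ $device == "$target" ]] && echo "$bdf"
done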
00:03:51.316 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:51.316 06:46:50 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:51.316 06:46:50 env -- common/autotest_common.sh@1691 -- # lcov --version 00:03:51.316 06:46:50 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:51.316 06:46:50 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:51.316 06:46:50 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:51.316 06:46:50 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:51.316 06:46:50 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:51.316 06:46:50 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:51.316 06:46:50 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:51.316 06:46:50 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:51.316 06:46:50 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:51.316 06:46:50 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:51.316 06:46:50 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:51.316 06:46:50 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:51.316 06:46:50 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:51.316 06:46:50 env -- scripts/common.sh@344 -- # case "$op" in 00:03:51.316 06:46:50 env -- scripts/common.sh@345 -- # : 1 00:03:51.316 06:46:50 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:51.316 06:46:50 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:51.316 06:46:50 env -- scripts/common.sh@365 -- # decimal 1 00:03:51.316 06:46:50 env -- scripts/common.sh@353 -- # local d=1 00:03:51.316 06:46:50 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:51.316 06:46:50 env -- scripts/common.sh@355 -- # echo 1 00:03:51.316 06:46:50 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:51.316 06:46:50 env -- scripts/common.sh@366 -- # decimal 2 00:03:51.316 06:46:50 env -- scripts/common.sh@353 -- # local d=2 00:03:51.316 06:46:50 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:51.316 06:46:50 env -- scripts/common.sh@355 -- # echo 2 00:03:51.316 06:46:50 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:51.316 06:46:50 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:51.316 06:46:50 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:51.316 06:46:50 env -- scripts/common.sh@368 -- # return 0 00:03:51.316 06:46:50 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:51.316 06:46:50 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:51.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.317 --rc genhtml_branch_coverage=1 00:03:51.317 --rc genhtml_function_coverage=1 00:03:51.317 --rc genhtml_legend=1 00:03:51.317 --rc geninfo_all_blocks=1 00:03:51.317 --rc geninfo_unexecuted_blocks=1 00:03:51.317 00:03:51.317 ' 00:03:51.317 06:46:50 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:51.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.317 --rc genhtml_branch_coverage=1 00:03:51.317 --rc genhtml_function_coverage=1 00:03:51.317 --rc genhtml_legend=1 00:03:51.317 --rc geninfo_all_blocks=1 00:03:51.317 --rc geninfo_unexecuted_blocks=1 00:03:51.317 00:03:51.317 ' 00:03:51.317 06:46:50 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:51.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.317 --rc genhtml_branch_coverage=1 00:03:51.317 --rc genhtml_function_coverage=1 
00:03:51.317 --rc genhtml_legend=1 00:03:51.317 --rc geninfo_all_blocks=1 00:03:51.317 --rc geninfo_unexecuted_blocks=1 00:03:51.317 00:03:51.317 ' 00:03:51.317 06:46:50 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:51.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.317 --rc genhtml_branch_coverage=1 00:03:51.317 --rc genhtml_function_coverage=1 00:03:51.317 --rc genhtml_legend=1 00:03:51.317 --rc geninfo_all_blocks=1 00:03:51.317 --rc geninfo_unexecuted_blocks=1 00:03:51.317 00:03:51.317 ' 00:03:51.317 06:46:50 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:51.317 06:46:50 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:51.317 06:46:50 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:51.317 06:46:50 env -- common/autotest_common.sh@10 -- # set +x 00:03:51.317 ************************************ 00:03:51.317 START TEST env_memory 00:03:51.317 ************************************ 00:03:51.317 06:46:50 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:51.317 00:03:51.317 00:03:51.317 CUnit - A unit testing framework for C - Version 2.1-3 00:03:51.317 http://cunit.sourceforge.net/ 00:03:51.317 00:03:51.317 00:03:51.317 Suite: memory 00:03:51.317 Test: alloc and free memory map ...[2024-10-16 06:46:50.758683] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:51.317 passed 00:03:51.317 Test: mem map translation ...[2024-10-16 06:46:50.784279] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:51.317 [2024-10-16 06:46:50.784308] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:51.317 [2024-10-16 06:46:50.784355] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:51.317 [2024-10-16 06:46:50.784363] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:51.578 passed 00:03:51.578 Test: mem map registration ...[2024-10-16 06:46:50.839639] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:51.578 [2024-10-16 06:46:50.839663] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:51.578 passed 00:03:51.578 Test: mem map adjacent registrations ...passed 00:03:51.578 00:03:51.578 Run Summary: Type Total Ran Passed Failed Inactive 00:03:51.578 suites 1 1 n/a 0 0 00:03:51.578 tests 4 4 4 0 0 00:03:51.578 asserts 152 152 152 0 n/a 00:03:51.578 00:03:51.578 Elapsed time = 0.191 seconds 00:03:51.578 00:03:51.578 real 0m0.206s 00:03:51.578 user 0m0.196s 00:03:51.578 sys 0m0.009s 00:03:51.578 06:46:50 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:51.578 06:46:50 env.env_memory -- common/autotest_common.sh@10 -- # set +x 
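[Editor's note] The lt 1.15 2 trace that appears before each test group (scripts/common.sh) is how autotest decides which lcov option spelling to use: both version strings are split on '.', '-' and ':' and compared field by field, and since lcov 1.15 sorts below 2 the legacy --rc lcov_branch_coverage / --rc lcov_function_coverage flags are selected. A condensed, self-contained sketch of that comparison (the real cmp_versions also normalizes non-numeric fields, which is omitted here):

#!/usr/bin/env bash
# Field-wise dotted-version compare, as traced in scripts/common.sh.
# Returns 0 (true) when $1 is strictly older than $2.
lt() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        # Missing fields default to 0, so "2" compares like "2.0.0".
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
}

lt 1.15 2 && echo "lcov < 2: use the legacy --rc lcov_* coverage options"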
00:03:51.578 ************************************ 00:03:51.578 END TEST env_memory 00:03:51.578 ************************************ 00:03:51.578 06:46:50 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:51.578 06:46:50 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:51.578 06:46:50 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:51.578 06:46:50 env -- common/autotest_common.sh@10 -- # set +x 00:03:51.578 ************************************ 00:03:51.578 START TEST env_vtophys 00:03:51.578 ************************************ 00:03:51.578 06:46:50 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:51.578 EAL: lib.eal log level changed from notice to debug 00:03:51.578 EAL: Detected lcore 0 as core 0 on socket 0 00:03:51.578 EAL: Detected lcore 1 as core 1 on socket 0 00:03:51.578 EAL: Detected lcore 2 as core 2 on socket 0 00:03:51.578 EAL: Detected lcore 3 as core 3 on socket 0 00:03:51.578 EAL: Detected lcore 4 as core 4 on socket 0 00:03:51.578 EAL: Detected lcore 5 as core 5 on socket 0 00:03:51.578 EAL: Detected lcore 6 as core 6 on socket 0 00:03:51.578 EAL: Detected lcore 7 as core 7 on socket 0 00:03:51.578 EAL: Detected lcore 8 as core 8 on socket 0 00:03:51.578 EAL: Detected lcore 9 as core 9 on socket 0 00:03:51.578 EAL: Detected lcore 10 as core 10 on socket 0 00:03:51.578 EAL: Detected lcore 11 as core 11 on socket 0 00:03:51.578 EAL: Detected lcore 12 as core 12 on socket 0 00:03:51.578 EAL: Detected lcore 13 as core 13 on socket 0 00:03:51.578 EAL: Detected lcore 14 as core 14 on socket 0 00:03:51.578 EAL: Detected lcore 15 as core 15 on socket 0 00:03:51.578 EAL: Detected lcore 16 as core 16 on socket 0 00:03:51.578 EAL: Detected lcore 17 as core 17 on socket 0 00:03:51.578 EAL: Detected lcore 18 as core 18 on socket 0 00:03:51.578 EAL: Detected lcore 19 as core 19 on socket 0 00:03:51.578 EAL: Detected lcore 20 as core 20 on socket 0 00:03:51.578 EAL: Detected lcore 21 as core 21 on socket 0 00:03:51.578 EAL: Detected lcore 22 as core 22 on socket 0 00:03:51.578 EAL: Detected lcore 23 as core 23 on socket 0 00:03:51.578 EAL: Detected lcore 24 as core 24 on socket 0 00:03:51.578 EAL: Detected lcore 25 as core 25 on socket 0 00:03:51.578 EAL: Detected lcore 26 as core 26 on socket 0 00:03:51.578 EAL: Detected lcore 27 as core 27 on socket 0 00:03:51.578 EAL: Detected lcore 28 as core 28 on socket 0 00:03:51.578 EAL: Detected lcore 29 as core 29 on socket 0 00:03:51.578 EAL: Detected lcore 30 as core 30 on socket 0 00:03:51.578 EAL: Detected lcore 31 as core 31 on socket 0 00:03:51.578 EAL: Detected lcore 32 as core 32 on socket 0 00:03:51.578 EAL: Detected lcore 33 as core 33 on socket 0 00:03:51.578 EAL: Detected lcore 34 as core 34 on socket 0 00:03:51.578 EAL: Detected lcore 35 as core 35 on socket 0 00:03:51.578 EAL: Detected lcore 36 as core 0 on socket 1 00:03:51.578 EAL: Detected lcore 37 as core 1 on socket 1 00:03:51.578 EAL: Detected lcore 38 as core 2 on socket 1 00:03:51.578 EAL: Detected lcore 39 as core 3 on socket 1 00:03:51.578 EAL: Detected lcore 40 as core 4 on socket 1 00:03:51.578 EAL: Detected lcore 41 as core 5 on socket 1 00:03:51.579 EAL: Detected lcore 42 as core 6 on socket 1 00:03:51.579 EAL: Detected lcore 43 as core 7 on socket 1 00:03:51.579 EAL: Detected lcore 44 as core 8 on socket 1 00:03:51.579 EAL: Detected lcore 45 as core 9 on socket 1 
00:03:51.579 EAL: Detected lcore 46 as core 10 on socket 1 00:03:51.579 EAL: Detected lcore 47 as core 11 on socket 1 00:03:51.579 EAL: Detected lcore 48 as core 12 on socket 1 00:03:51.579 EAL: Detected lcore 49 as core 13 on socket 1 00:03:51.579 EAL: Detected lcore 50 as core 14 on socket 1 00:03:51.579 EAL: Detected lcore 51 as core 15 on socket 1 00:03:51.579 EAL: Detected lcore 52 as core 16 on socket 1 00:03:51.579 EAL: Detected lcore 53 as core 17 on socket 1 00:03:51.579 EAL: Detected lcore 54 as core 18 on socket 1 00:03:51.579 EAL: Detected lcore 55 as core 19 on socket 1 00:03:51.579 EAL: Detected lcore 56 as core 20 on socket 1 00:03:51.579 EAL: Detected lcore 57 as core 21 on socket 1 00:03:51.579 EAL: Detected lcore 58 as core 22 on socket 1 00:03:51.579 EAL: Detected lcore 59 as core 23 on socket 1 00:03:51.579 EAL: Detected lcore 60 as core 24 on socket 1 00:03:51.579 EAL: Detected lcore 61 as core 25 on socket 1 00:03:51.579 EAL: Detected lcore 62 as core 26 on socket 1 00:03:51.579 EAL: Detected lcore 63 as core 27 on socket 1 00:03:51.579 EAL: Detected lcore 64 as core 28 on socket 1 00:03:51.579 EAL: Detected lcore 65 as core 29 on socket 1 00:03:51.579 EAL: Detected lcore 66 as core 30 on socket 1 00:03:51.579 EAL: Detected lcore 67 as core 31 on socket 1 00:03:51.579 EAL: Detected lcore 68 as core 32 on socket 1 00:03:51.579 EAL: Detected lcore 69 as core 33 on socket 1 00:03:51.579 EAL: Detected lcore 70 as core 34 on socket 1 00:03:51.579 EAL: Detected lcore 71 as core 35 on socket 1 00:03:51.579 EAL: Detected lcore 72 as core 0 on socket 0 00:03:51.579 EAL: Detected lcore 73 as core 1 on socket 0 00:03:51.579 EAL: Detected lcore 74 as core 2 on socket 0 00:03:51.579 EAL: Detected lcore 75 as core 3 on socket 0 00:03:51.579 EAL: Detected lcore 76 as core 4 on socket 0 00:03:51.579 EAL: Detected lcore 77 as core 5 on socket 0 00:03:51.579 EAL: Detected lcore 78 as core 6 on socket 0 00:03:51.579 EAL: Detected lcore 79 as core 7 on socket 0 00:03:51.579 EAL: Detected lcore 80 as core 8 on socket 0 00:03:51.579 EAL: Detected lcore 81 as core 9 on socket 0 00:03:51.579 EAL: Detected lcore 82 as core 10 on socket 0 00:03:51.579 EAL: Detected lcore 83 as core 11 on socket 0 00:03:51.579 EAL: Detected lcore 84 as core 12 on socket 0 00:03:51.579 EAL: Detected lcore 85 as core 13 on socket 0 00:03:51.579 EAL: Detected lcore 86 as core 14 on socket 0 00:03:51.579 EAL: Detected lcore 87 as core 15 on socket 0 00:03:51.579 EAL: Detected lcore 88 as core 16 on socket 0 00:03:51.579 EAL: Detected lcore 89 as core 17 on socket 0 00:03:51.579 EAL: Detected lcore 90 as core 18 on socket 0 00:03:51.579 EAL: Detected lcore 91 as core 19 on socket 0 00:03:51.579 EAL: Detected lcore 92 as core 20 on socket 0 00:03:51.579 EAL: Detected lcore 93 as core 21 on socket 0 00:03:51.579 EAL: Detected lcore 94 as core 22 on socket 0 00:03:51.579 EAL: Detected lcore 95 as core 23 on socket 0 00:03:51.579 EAL: Detected lcore 96 as core 24 on socket 0 00:03:51.579 EAL: Detected lcore 97 as core 25 on socket 0 00:03:51.579 EAL: Detected lcore 98 as core 26 on socket 0 00:03:51.579 EAL: Detected lcore 99 as core 27 on socket 0 00:03:51.579 EAL: Detected lcore 100 as core 28 on socket 0 00:03:51.579 EAL: Detected lcore 101 as core 29 on socket 0 00:03:51.579 EAL: Detected lcore 102 as core 30 on socket 0 00:03:51.579 EAL: Detected lcore 103 as core 31 on socket 0 00:03:51.579 EAL: Detected lcore 104 as core 32 on socket 0 00:03:51.579 EAL: Detected lcore 105 as core 33 on socket 0 00:03:51.579 EAL: 
Detected lcore 106 as core 34 on socket 0 00:03:51.579 EAL: Detected lcore 107 as core 35 on socket 0 00:03:51.579 EAL: Detected lcore 108 as core 0 on socket 1 00:03:51.579 EAL: Detected lcore 109 as core 1 on socket 1 00:03:51.579 EAL: Detected lcore 110 as core 2 on socket 1 00:03:51.579 EAL: Detected lcore 111 as core 3 on socket 1 00:03:51.579 EAL: Detected lcore 112 as core 4 on socket 1 00:03:51.579 EAL: Detected lcore 113 as core 5 on socket 1 00:03:51.579 EAL: Detected lcore 114 as core 6 on socket 1 00:03:51.579 EAL: Detected lcore 115 as core 7 on socket 1 00:03:51.579 EAL: Detected lcore 116 as core 8 on socket 1 00:03:51.579 EAL: Detected lcore 117 as core 9 on socket 1 00:03:51.579 EAL: Detected lcore 118 as core 10 on socket 1 00:03:51.579 EAL: Detected lcore 119 as core 11 on socket 1 00:03:51.579 EAL: Detected lcore 120 as core 12 on socket 1 00:03:51.579 EAL: Detected lcore 121 as core 13 on socket 1 00:03:51.579 EAL: Detected lcore 122 as core 14 on socket 1 00:03:51.579 EAL: Detected lcore 123 as core 15 on socket 1 00:03:51.579 EAL: Detected lcore 124 as core 16 on socket 1 00:03:51.579 EAL: Detected lcore 125 as core 17 on socket 1 00:03:51.579 EAL: Detected lcore 126 as core 18 on socket 1 00:03:51.579 EAL: Detected lcore 127 as core 19 on socket 1 00:03:51.579 EAL: Skipped lcore 128 as core 20 on socket 1 00:03:51.579 EAL: Skipped lcore 129 as core 21 on socket 1 00:03:51.579 EAL: Skipped lcore 130 as core 22 on socket 1 00:03:51.579 EAL: Skipped lcore 131 as core 23 on socket 1 00:03:51.579 EAL: Skipped lcore 132 as core 24 on socket 1 00:03:51.579 EAL: Skipped lcore 133 as core 25 on socket 1 00:03:51.579 EAL: Skipped lcore 134 as core 26 on socket 1 00:03:51.579 EAL: Skipped lcore 135 as core 27 on socket 1 00:03:51.579 EAL: Skipped lcore 136 as core 28 on socket 1 00:03:51.579 EAL: Skipped lcore 137 as core 29 on socket 1 00:03:51.579 EAL: Skipped lcore 138 as core 30 on socket 1 00:03:51.579 EAL: Skipped lcore 139 as core 31 on socket 1 00:03:51.579 EAL: Skipped lcore 140 as core 32 on socket 1 00:03:51.579 EAL: Skipped lcore 141 as core 33 on socket 1 00:03:51.579 EAL: Skipped lcore 142 as core 34 on socket 1 00:03:51.579 EAL: Skipped lcore 143 as core 35 on socket 1 00:03:51.579 EAL: Maximum logical cores by configuration: 128 00:03:51.579 EAL: Detected CPU lcores: 128 00:03:51.579 EAL: Detected NUMA nodes: 2 00:03:51.579 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:51.579 EAL: Detected shared linkage of DPDK 00:03:51.579 EAL: No shared files mode enabled, IPC will be disabled 00:03:51.579 EAL: Bus pci wants IOVA as 'DC' 00:03:51.579 EAL: Buses did not request a specific IOVA mode. 00:03:51.579 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:51.579 EAL: Selected IOVA mode 'VA' 00:03:51.579 EAL: Probing VFIO support... 00:03:51.579 EAL: IOMMU type 1 (Type 1) is supported 00:03:51.579 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:51.579 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:51.579 EAL: VFIO support initialized 00:03:51.579 EAL: Ask a virtual area of 0x2e000 bytes 00:03:51.579 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:51.579 EAL: Setting up physically contiguous memory... 
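[Editor's note] The memseg layout that follows is worth decoding: EAL creates 4 segment lists per NUMA node, each with n_segs:8192 segments of hugepage_sz:2097152 (2 MiB), i.e. 8192 x 2 MiB = 16 GiB per list, which matches each 0x400000000-byte VA reservation below (128 GiB of address space reserved across both sockets). The hugepages themselves are a host-side precondition; a quick way to check what the kernel actually set aside per node while reading a log like this:

#!/usr/bin/env bash
# Per-NUMA-node hugepage counters, matching the node0/node1 1048576kB and
# 2048kB rows printed by "setup.sh status" earlier in this log.
for node in /sys/devices/system/node/node*; do
    for hp in "$node"/hugepages/hugepages-*; do
        printf '%s %s: total=%s free=%s\n' \
            "$(basename "$node")" "$(basename "$hp")" \
            "$(cat "$hp/nr_hugepages")" "$(cat "$hp/free_hugepages")"
    done
done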
00:03:51.579 EAL: Setting maximum number of open files to 524288 00:03:51.579 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:51.579 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:51.579 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:51.579 EAL: Ask a virtual area of 0x61000 bytes 00:03:51.579 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:51.579 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:51.579 EAL: Ask a virtual area of 0x400000000 bytes 00:03:51.579 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:51.579 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:51.579 EAL: Ask a virtual area of 0x61000 bytes 00:03:51.579 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:51.579 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:51.579 EAL: Ask a virtual area of 0x400000000 bytes 00:03:51.579 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:51.579 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:51.579 EAL: Ask a virtual area of 0x61000 bytes 00:03:51.579 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:51.579 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:51.579 EAL: Ask a virtual area of 0x400000000 bytes 00:03:51.579 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:51.579 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:51.579 EAL: Ask a virtual area of 0x61000 bytes 00:03:51.579 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:51.579 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:51.579 EAL: Ask a virtual area of 0x400000000 bytes 00:03:51.579 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:51.579 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:51.579 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:51.579 EAL: Ask a virtual area of 0x61000 bytes 00:03:51.579 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:51.579 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:51.579 EAL: Ask a virtual area of 0x400000000 bytes 00:03:51.579 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:51.579 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:51.579 EAL: Ask a virtual area of 0x61000 bytes 00:03:51.579 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:51.579 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:51.579 EAL: Ask a virtual area of 0x400000000 bytes 00:03:51.579 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:51.579 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:51.579 EAL: Ask a virtual area of 0x61000 bytes 00:03:51.579 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:51.579 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:51.579 EAL: Ask a virtual area of 0x400000000 bytes 00:03:51.579 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:51.580 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:51.580 EAL: Ask a virtual area of 0x61000 bytes 00:03:51.580 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:51.580 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:51.580 EAL: Ask a virtual area of 0x400000000 bytes 00:03:51.580 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:03:51.580 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:51.580 EAL: Hugepages will be freed exactly as allocated. 00:03:51.580 EAL: No shared files mode enabled, IPC is disabled 00:03:51.580 EAL: No shared files mode enabled, IPC is disabled 00:03:51.580 EAL: TSC frequency is ~2400000 KHz 00:03:51.580 EAL: Main lcore 0 is ready (tid=7f42766c6a00;cpuset=[0]) 00:03:51.580 EAL: Trying to obtain current memory policy. 00:03:51.580 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.580 EAL: Restoring previous memory policy: 0 00:03:51.580 EAL: request: mp_malloc_sync 00:03:51.580 EAL: No shared files mode enabled, IPC is disabled 00:03:51.580 EAL: Heap on socket 0 was expanded by 2MB 00:03:51.580 EAL: No shared files mode enabled, IPC is disabled 00:03:51.840 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:51.840 EAL: Mem event callback 'spdk:(nil)' registered 00:03:51.840 00:03:51.840 00:03:51.840 CUnit - A unit testing framework for C - Version 2.1-3 00:03:51.840 http://cunit.sourceforge.net/ 00:03:51.840 00:03:51.840 00:03:51.840 Suite: components_suite 00:03:51.840 Test: vtophys_malloc_test ...passed 00:03:51.840 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:51.840 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.840 EAL: Restoring previous memory policy: 4 00:03:51.840 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.840 EAL: request: mp_malloc_sync 00:03:51.840 EAL: No shared files mode enabled, IPC is disabled 00:03:51.840 EAL: Heap on socket 0 was expanded by 4MB 00:03:51.840 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.840 EAL: request: mp_malloc_sync 00:03:51.840 EAL: No shared files mode enabled, IPC is disabled 00:03:51.840 EAL: Heap on socket 0 was shrunk by 4MB 00:03:51.840 EAL: Trying to obtain current memory policy. 00:03:51.840 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.840 EAL: Restoring previous memory policy: 4 00:03:51.840 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.840 EAL: request: mp_malloc_sync 00:03:51.840 EAL: No shared files mode enabled, IPC is disabled 00:03:51.840 EAL: Heap on socket 0 was expanded by 6MB 00:03:51.840 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.840 EAL: request: mp_malloc_sync 00:03:51.840 EAL: No shared files mode enabled, IPC is disabled 00:03:51.840 EAL: Heap on socket 0 was shrunk by 6MB 00:03:51.840 EAL: Trying to obtain current memory policy. 00:03:51.840 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.840 EAL: Restoring previous memory policy: 4 00:03:51.840 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.840 EAL: request: mp_malloc_sync 00:03:51.840 EAL: No shared files mode enabled, IPC is disabled 00:03:51.840 EAL: Heap on socket 0 was expanded by 10MB 00:03:51.840 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.840 EAL: request: mp_malloc_sync 00:03:51.840 EAL: No shared files mode enabled, IPC is disabled 00:03:51.840 EAL: Heap on socket 0 was shrunk by 10MB 00:03:51.840 EAL: Trying to obtain current memory policy. 
00:03:51.840 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.840 EAL: Restoring previous memory policy: 4 00:03:51.840 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.840 EAL: request: mp_malloc_sync 00:03:51.840 EAL: No shared files mode enabled, IPC is disabled 00:03:51.840 EAL: Heap on socket 0 was expanded by 18MB 00:03:51.840 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.840 EAL: request: mp_malloc_sync 00:03:51.840 EAL: No shared files mode enabled, IPC is disabled 00:03:51.840 EAL: Heap on socket 0 was shrunk by 18MB 00:03:51.840 EAL: Trying to obtain current memory policy. 00:03:51.840 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.840 EAL: Restoring previous memory policy: 4 00:03:51.840 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.840 EAL: request: mp_malloc_sync 00:03:51.840 EAL: No shared files mode enabled, IPC is disabled 00:03:51.840 EAL: Heap on socket 0 was expanded by 34MB 00:03:51.840 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.840 EAL: request: mp_malloc_sync 00:03:51.840 EAL: No shared files mode enabled, IPC is disabled 00:03:51.840 EAL: Heap on socket 0 was shrunk by 34MB 00:03:51.840 EAL: Trying to obtain current memory policy. 00:03:51.840 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.840 EAL: Restoring previous memory policy: 4 00:03:51.840 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.840 EAL: request: mp_malloc_sync 00:03:51.840 EAL: No shared files mode enabled, IPC is disabled 00:03:51.840 EAL: Heap on socket 0 was expanded by 66MB 00:03:51.840 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.840 EAL: request: mp_malloc_sync 00:03:51.840 EAL: No shared files mode enabled, IPC is disabled 00:03:51.840 EAL: Heap on socket 0 was shrunk by 66MB 00:03:51.840 EAL: Trying to obtain current memory policy. 00:03:51.840 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.840 EAL: Restoring previous memory policy: 4 00:03:51.840 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.840 EAL: request: mp_malloc_sync 00:03:51.840 EAL: No shared files mode enabled, IPC is disabled 00:03:51.840 EAL: Heap on socket 0 was expanded by 130MB 00:03:51.840 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.840 EAL: request: mp_malloc_sync 00:03:51.840 EAL: No shared files mode enabled, IPC is disabled 00:03:51.841 EAL: Heap on socket 0 was shrunk by 130MB 00:03:51.841 EAL: Trying to obtain current memory policy. 00:03:51.841 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.841 EAL: Restoring previous memory policy: 4 00:03:51.841 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.841 EAL: request: mp_malloc_sync 00:03:51.841 EAL: No shared files mode enabled, IPC is disabled 00:03:51.841 EAL: Heap on socket 0 was expanded by 258MB 00:03:51.841 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.841 EAL: request: mp_malloc_sync 00:03:51.841 EAL: No shared files mode enabled, IPC is disabled 00:03:51.841 EAL: Heap on socket 0 was shrunk by 258MB 00:03:51.841 EAL: Trying to obtain current memory policy. 
00:03:51.841 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:52.101 EAL: Restoring previous memory policy: 4 00:03:52.101 EAL: Calling mem event callback 'spdk:(nil)' 00:03:52.101 EAL: request: mp_malloc_sync 00:03:52.101 EAL: No shared files mode enabled, IPC is disabled 00:03:52.101 EAL: Heap on socket 0 was expanded by 514MB 00:03:52.101 EAL: Calling mem event callback 'spdk:(nil)' 00:03:52.101 EAL: request: mp_malloc_sync 00:03:52.101 EAL: No shared files mode enabled, IPC is disabled 00:03:52.101 EAL: Heap on socket 0 was shrunk by 514MB 00:03:52.101 EAL: Trying to obtain current memory policy. 00:03:52.101 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:52.362 EAL: Restoring previous memory policy: 4 00:03:52.362 EAL: Calling mem event callback 'spdk:(nil)' 00:03:52.362 EAL: request: mp_malloc_sync 00:03:52.362 EAL: No shared files mode enabled, IPC is disabled 00:03:52.362 EAL: Heap on socket 0 was expanded by 1026MB 00:03:52.362 EAL: Calling mem event callback 'spdk:(nil)' 00:03:52.362 EAL: request: mp_malloc_sync 00:03:52.362 EAL: No shared files mode enabled, IPC is disabled 00:03:52.362 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:52.362 passed 00:03:52.362 00:03:52.362 Run Summary: Type Total Ran Passed Failed Inactive 00:03:52.362 suites 1 1 n/a 0 0 00:03:52.362 tests 2 2 2 0 0 00:03:52.362 asserts 497 497 497 0 n/a 00:03:52.362 00:03:52.362 Elapsed time = 0.685 seconds 00:03:52.362 EAL: Calling mem event callback 'spdk:(nil)' 00:03:52.362 EAL: request: mp_malloc_sync 00:03:52.362 EAL: No shared files mode enabled, IPC is disabled 00:03:52.362 EAL: Heap on socket 0 was shrunk by 2MB 00:03:52.362 EAL: No shared files mode enabled, IPC is disabled 00:03:52.362 EAL: No shared files mode enabled, IPC is disabled 00:03:52.362 EAL: No shared files mode enabled, IPC is disabled 00:03:52.362 00:03:52.362 real 0m0.826s 00:03:52.362 user 0m0.431s 00:03:52.362 sys 0m0.365s 00:03:52.362 06:46:51 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:52.362 06:46:51 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:52.362 ************************************ 00:03:52.362 END TEST env_vtophys 00:03:52.362 ************************************ 00:03:52.362 06:46:51 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:52.622 06:46:51 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:52.622 06:46:51 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:52.622 06:46:51 env -- common/autotest_common.sh@10 -- # set +x 00:03:52.622 ************************************ 00:03:52.622 START TEST env_pci 00:03:52.622 ************************************ 00:03:52.622 06:46:51 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:52.622 00:03:52.622 00:03:52.622 CUnit - A unit testing framework for C - Version 2.1-3 00:03:52.622 http://cunit.sourceforge.net/ 00:03:52.622 00:03:52.622 00:03:52.622 Suite: pci 00:03:52.622 Test: pci_hook ...[2024-10-16 06:46:51.920577] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2884830 has claimed it 00:03:52.622 EAL: Cannot find device (10000:00:01.0) 00:03:52.622 EAL: Failed to attach device on primary process 00:03:52.622 passed 00:03:52.622 00:03:52.622 Run Summary: Type Total Ran Passed Failed Inactive 
00:03:52.622 suites 1 1 n/a 0 0 00:03:52.622 tests 1 1 1 0 0 00:03:52.622 asserts 25 25 25 0 n/a 00:03:52.622 00:03:52.622 Elapsed time = 0.029 seconds 00:03:52.622 00:03:52.622 real 0m0.050s 00:03:52.622 user 0m0.013s 00:03:52.622 sys 0m0.036s 00:03:52.622 06:46:51 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:52.622 06:46:51 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:52.623 ************************************ 00:03:52.623 END TEST env_pci 00:03:52.623 ************************************ 00:03:52.623 06:46:51 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:52.623 06:46:51 env -- env/env.sh@15 -- # uname 00:03:52.623 06:46:52 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:52.623 06:46:52 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:52.623 06:46:52 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:52.623 06:46:52 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:03:52.623 06:46:52 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:52.623 06:46:52 env -- common/autotest_common.sh@10 -- # set +x 00:03:52.623 ************************************ 00:03:52.623 START TEST env_dpdk_post_init 00:03:52.623 ************************************ 00:03:52.623 06:46:52 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:52.623 EAL: Detected CPU lcores: 128 00:03:52.623 EAL: Detected NUMA nodes: 2 00:03:52.623 EAL: Detected shared linkage of DPDK 00:03:52.623 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:52.623 EAL: Selected IOVA mode 'VA' 00:03:52.623 EAL: VFIO support initialized 00:03:52.623 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:52.883 EAL: Using IOMMU type 1 (Type 1) 00:03:52.883 EAL: Ignore mapping IO port bar(1) 00:03:53.144 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:03:53.144 EAL: Ignore mapping IO port bar(1) 00:03:53.144 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:03:53.404 EAL: Ignore mapping IO port bar(1) 00:03:53.404 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:03:53.665 EAL: Ignore mapping IO port bar(1) 00:03:53.665 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:03:53.925 EAL: Ignore mapping IO port bar(1) 00:03:53.925 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:03:53.925 EAL: Ignore mapping IO port bar(1) 00:03:54.186 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:03:54.186 EAL: Ignore mapping IO port bar(1) 00:03:54.446 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:03:54.446 EAL: Ignore mapping IO port bar(1) 00:03:54.707 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:03:54.707 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:03:54.967 EAL: Ignore mapping IO port bar(1) 00:03:54.967 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:03:55.228 EAL: Ignore mapping IO port bar(1) 00:03:55.228 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:03:55.488 EAL: Ignore mapping IO port bar(1) 00:03:55.488 
EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:03:55.749 EAL: Ignore mapping IO port bar(1) 00:03:55.749 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:03:55.749 EAL: Ignore mapping IO port bar(1) 00:03:56.010 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:03:56.010 EAL: Ignore mapping IO port bar(1) 00:03:56.270 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:03:56.270 EAL: Ignore mapping IO port bar(1) 00:03:56.270 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:03:56.530 EAL: Ignore mapping IO port bar(1) 00:03:56.530 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:03:56.530 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:03:56.530 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:03:56.790 Starting DPDK initialization... 00:03:56.790 Starting SPDK post initialization... 00:03:56.790 SPDK NVMe probe 00:03:56.790 Attaching to 0000:65:00.0 00:03:56.790 Attached to 0000:65:00.0 00:03:56.790 Cleaning up... 00:03:58.703 00:03:58.703 real 0m5.740s 00:03:58.703 user 0m0.095s 00:03:58.703 sys 0m0.196s 00:03:58.703 06:46:57 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:58.703 06:46:57 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:58.703 ************************************ 00:03:58.703 END TEST env_dpdk_post_init 00:03:58.703 ************************************ 00:03:58.703 06:46:57 env -- env/env.sh@26 -- # uname 00:03:58.703 06:46:57 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:58.703 06:46:57 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:58.703 06:46:57 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:58.703 06:46:57 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:58.704 06:46:57 env -- common/autotest_common.sh@10 -- # set +x 00:03:58.704 ************************************ 00:03:58.704 START TEST env_mem_callbacks 00:03:58.704 ************************************ 00:03:58.704 06:46:57 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:58.704 EAL: Detected CPU lcores: 128 00:03:58.704 EAL: Detected NUMA nodes: 2 00:03:58.704 EAL: Detected shared linkage of DPDK 00:03:58.704 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:58.704 EAL: Selected IOVA mode 'VA' 00:03:58.704 EAL: VFIO support initialized 00:03:58.704 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:58.704 00:03:58.704 00:03:58.704 CUnit - A unit testing framework for C - Version 2.1-3 00:03:58.704 http://cunit.sourceforge.net/ 00:03:58.704 00:03:58.704 00:03:58.704 Suite: memory 00:03:58.704 Test: test ... 
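
The register/malloc/unregister pairs that follow are the mem_callbacks unit test driving SPDK's memory-event notifications: each allocation large enough to map new memory triggers a register callback, and each matching free an unregister. A rough sketch of running just this test standalone, with the binary path taken from the run_test invocation above and sudo assumed for hugepage access:

$ sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
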
00:03:58.704 register 0x200000200000 2097152 00:03:58.704 malloc 3145728 00:03:58.704 register 0x200000400000 4194304 00:03:58.704 buf 0x200000500000 len 3145728 PASSED 00:03:58.704 malloc 64 00:03:58.704 buf 0x2000004fff40 len 64 PASSED 00:03:58.704 malloc 4194304 00:03:58.704 register 0x200000800000 6291456 00:03:58.704 buf 0x200000a00000 len 4194304 PASSED 00:03:58.704 free 0x200000500000 3145728 00:03:58.704 free 0x2000004fff40 64 00:03:58.704 unregister 0x200000400000 4194304 PASSED 00:03:58.704 free 0x200000a00000 4194304 00:03:58.704 unregister 0x200000800000 6291456 PASSED 00:03:58.704 malloc 8388608 00:03:58.704 register 0x200000400000 10485760 00:03:58.704 buf 0x200000600000 len 8388608 PASSED 00:03:58.704 free 0x200000600000 8388608 00:03:58.704 unregister 0x200000400000 10485760 PASSED 00:03:58.704 passed 00:03:58.704 00:03:58.704 Run Summary: Type Total Ran Passed Failed Inactive 00:03:58.704 suites 1 1 n/a 0 0 00:03:58.704 tests 1 1 1 0 0 00:03:58.704 asserts 15 15 15 0 n/a 00:03:58.704 00:03:58.704 Elapsed time = 0.010 seconds 00:03:58.704 00:03:58.704 real 0m0.067s 00:03:58.704 user 0m0.018s 00:03:58.704 sys 0m0.048s 00:03:58.704 06:46:57 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:58.704 06:46:57 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:58.704 ************************************ 00:03:58.704 END TEST env_mem_callbacks 00:03:58.704 ************************************ 00:03:58.704 00:03:58.704 real 0m7.512s 00:03:58.704 user 0m1.016s 00:03:58.704 sys 0m1.052s 00:03:58.704 06:46:57 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:58.704 06:46:57 env -- common/autotest_common.sh@10 -- # set +x 00:03:58.704 ************************************ 00:03:58.704 END TEST env 00:03:58.704 ************************************ 00:03:58.704 06:46:58 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:58.704 06:46:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:58.704 06:46:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:58.704 06:46:58 -- common/autotest_common.sh@10 -- # set +x 00:03:58.704 ************************************ 00:03:58.704 START TEST rpc 00:03:58.704 ************************************ 00:03:58.704 06:46:58 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:58.704 * Looking for test storage... 
00:03:58.704 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:58.704 06:46:58 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:58.704 06:46:58 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:03:58.704 06:46:58 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:58.965 06:46:58 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:58.965 06:46:58 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:58.965 06:46:58 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:58.965 06:46:58 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:58.965 06:46:58 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:58.965 06:46:58 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:58.965 06:46:58 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:58.965 06:46:58 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:58.965 06:46:58 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:58.965 06:46:58 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:58.965 06:46:58 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:58.965 06:46:58 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:58.965 06:46:58 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:58.965 06:46:58 rpc -- scripts/common.sh@345 -- # : 1 00:03:58.965 06:46:58 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:58.965 06:46:58 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:58.965 06:46:58 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:58.965 06:46:58 rpc -- scripts/common.sh@353 -- # local d=1 00:03:58.965 06:46:58 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:58.965 06:46:58 rpc -- scripts/common.sh@355 -- # echo 1 00:03:58.965 06:46:58 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:58.965 06:46:58 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:58.965 06:46:58 rpc -- scripts/common.sh@353 -- # local d=2 00:03:58.965 06:46:58 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:58.965 06:46:58 rpc -- scripts/common.sh@355 -- # echo 2 00:03:58.965 06:46:58 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:58.965 06:46:58 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:58.965 06:46:58 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:58.965 06:46:58 rpc -- scripts/common.sh@368 -- # return 0 00:03:58.965 06:46:58 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:58.965 06:46:58 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:58.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.965 --rc genhtml_branch_coverage=1 00:03:58.965 --rc genhtml_function_coverage=1 00:03:58.965 --rc genhtml_legend=1 00:03:58.965 --rc geninfo_all_blocks=1 00:03:58.965 --rc geninfo_unexecuted_blocks=1 00:03:58.965 00:03:58.965 ' 00:03:58.965 06:46:58 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:58.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.965 --rc genhtml_branch_coverage=1 00:03:58.965 --rc genhtml_function_coverage=1 00:03:58.965 --rc genhtml_legend=1 00:03:58.965 --rc geninfo_all_blocks=1 00:03:58.965 --rc geninfo_unexecuted_blocks=1 00:03:58.965 00:03:58.965 ' 00:03:58.965 06:46:58 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:58.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.965 --rc genhtml_branch_coverage=1 00:03:58.965 --rc genhtml_function_coverage=1 
00:03:58.965 --rc genhtml_legend=1 00:03:58.965 --rc geninfo_all_blocks=1 00:03:58.965 --rc geninfo_unexecuted_blocks=1 00:03:58.965 00:03:58.965 ' 00:03:58.965 06:46:58 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:58.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.965 --rc genhtml_branch_coverage=1 00:03:58.965 --rc genhtml_function_coverage=1 00:03:58.965 --rc genhtml_legend=1 00:03:58.965 --rc geninfo_all_blocks=1 00:03:58.965 --rc geninfo_unexecuted_blocks=1 00:03:58.965 00:03:58.965 ' 00:03:58.965 06:46:58 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2886213 00:03:58.965 06:46:58 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:58.965 06:46:58 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2886213 00:03:58.965 06:46:58 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:58.965 06:46:58 rpc -- common/autotest_common.sh@831 -- # '[' -z 2886213 ']' 00:03:58.965 06:46:58 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:58.965 06:46:58 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:58.965 06:46:58 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:58.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:58.965 06:46:58 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:58.965 06:46:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:58.965 [2024-10-16 06:46:58.321391] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:03:58.965 [2024-10-16 06:46:58.321467] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2886213 ] 00:03:58.965 [2024-10-16 06:46:58.406023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:58.965 [2024-10-16 06:46:58.458092] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:58.965 [2024-10-16 06:46:58.458149] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2886213' to capture a snapshot of events at runtime. 00:03:58.965 [2024-10-16 06:46:58.458158] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:58.965 [2024-10-16 06:46:58.458166] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:58.965 [2024-10-16 06:46:58.458172] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2886213 for offline analysis/debug. 
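
The startup notices above spell out the tracing workflow for the 'bdev' tpoint group enabled with '-e bdev'. A minimal sketch of both capture modes, assuming spdk_trace is built into build/bin alongside spdk_tgt (the <pid> placeholder stands for whatever pid the target reports):

$ ./build/bin/spdk_tgt -e bdev &
$ ./build/bin/spdk_trace -s spdk_tgt -p <pid>      # live snapshot of bdev tracepoints
$ cp /dev/shm/spdk_tgt_trace.pid<pid> /tmp/        # or keep the shm file for offline analysis
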
00:03:58.965 [2024-10-16 06:46:58.459059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:59.910 06:46:59 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:59.910 06:46:59 rpc -- common/autotest_common.sh@864 -- # return 0 00:03:59.910 06:46:59 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:59.910 06:46:59 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:59.910 06:46:59 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:59.910 06:46:59 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:59.910 06:46:59 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:59.910 06:46:59 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:59.910 06:46:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:59.910 ************************************ 00:03:59.910 START TEST rpc_integrity 00:03:59.910 ************************************ 00:03:59.910 06:46:59 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:03:59.910 06:46:59 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:59.910 06:46:59 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:59.910 06:46:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.910 06:46:59 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:59.910 06:46:59 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:59.910 06:46:59 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:59.910 06:46:59 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:59.910 06:46:59 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:59.910 06:46:59 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:59.910 06:46:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.910 06:46:59 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:59.910 06:46:59 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:59.910 06:46:59 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:59.910 06:46:59 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:59.910 06:46:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.910 06:46:59 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:59.910 06:46:59 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:59.910 { 00:03:59.910 "name": "Malloc0", 00:03:59.910 "aliases": [ 00:03:59.910 "5c303563-93a8-4d18-b305-252d13b11103" 00:03:59.910 ], 00:03:59.910 "product_name": "Malloc disk", 00:03:59.910 "block_size": 512, 00:03:59.910 "num_blocks": 16384, 00:03:59.910 "uuid": "5c303563-93a8-4d18-b305-252d13b11103", 00:03:59.910 "assigned_rate_limits": { 00:03:59.910 "rw_ios_per_sec": 0, 00:03:59.910 "rw_mbytes_per_sec": 0, 00:03:59.910 "r_mbytes_per_sec": 0, 00:03:59.910 "w_mbytes_per_sec": 0 00:03:59.910 }, 
00:03:59.910 "claimed": false, 00:03:59.910 "zoned": false, 00:03:59.910 "supported_io_types": { 00:03:59.910 "read": true, 00:03:59.910 "write": true, 00:03:59.910 "unmap": true, 00:03:59.910 "flush": true, 00:03:59.910 "reset": true, 00:03:59.910 "nvme_admin": false, 00:03:59.910 "nvme_io": false, 00:03:59.910 "nvme_io_md": false, 00:03:59.910 "write_zeroes": true, 00:03:59.910 "zcopy": true, 00:03:59.910 "get_zone_info": false, 00:03:59.910 "zone_management": false, 00:03:59.910 "zone_append": false, 00:03:59.910 "compare": false, 00:03:59.910 "compare_and_write": false, 00:03:59.910 "abort": true, 00:03:59.910 "seek_hole": false, 00:03:59.910 "seek_data": false, 00:03:59.910 "copy": true, 00:03:59.910 "nvme_iov_md": false 00:03:59.910 }, 00:03:59.910 "memory_domains": [ 00:03:59.910 { 00:03:59.910 "dma_device_id": "system", 00:03:59.910 "dma_device_type": 1 00:03:59.910 }, 00:03:59.910 { 00:03:59.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:59.910 "dma_device_type": 2 00:03:59.910 } 00:03:59.910 ], 00:03:59.910 "driver_specific": {} 00:03:59.910 } 00:03:59.910 ]' 00:03:59.910 06:46:59 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:59.910 06:46:59 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:59.910 06:46:59 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:59.910 06:46:59 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:59.910 06:46:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.910 [2024-10-16 06:46:59.326962] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:59.910 [2024-10-16 06:46:59.327009] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:59.910 [2024-10-16 06:46:59.327024] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2137e60 00:03:59.910 [2024-10-16 06:46:59.327032] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:59.910 [2024-10-16 06:46:59.328570] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:59.910 [2024-10-16 06:46:59.328608] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:59.910 Passthru0 00:03:59.910 06:46:59 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:59.910 06:46:59 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:59.910 06:46:59 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:59.910 06:46:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.910 06:46:59 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:59.910 06:46:59 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:59.910 { 00:03:59.910 "name": "Malloc0", 00:03:59.910 "aliases": [ 00:03:59.910 "5c303563-93a8-4d18-b305-252d13b11103" 00:03:59.910 ], 00:03:59.910 "product_name": "Malloc disk", 00:03:59.910 "block_size": 512, 00:03:59.910 "num_blocks": 16384, 00:03:59.910 "uuid": "5c303563-93a8-4d18-b305-252d13b11103", 00:03:59.910 "assigned_rate_limits": { 00:03:59.910 "rw_ios_per_sec": 0, 00:03:59.910 "rw_mbytes_per_sec": 0, 00:03:59.910 "r_mbytes_per_sec": 0, 00:03:59.910 "w_mbytes_per_sec": 0 00:03:59.911 }, 00:03:59.911 "claimed": true, 00:03:59.911 "claim_type": "exclusive_write", 00:03:59.911 "zoned": false, 00:03:59.911 "supported_io_types": { 00:03:59.911 "read": true, 00:03:59.911 "write": true, 00:03:59.911 "unmap": true, 00:03:59.911 "flush": 
true, 00:03:59.911 "reset": true, 00:03:59.911 "nvme_admin": false, 00:03:59.911 "nvme_io": false, 00:03:59.911 "nvme_io_md": false, 00:03:59.911 "write_zeroes": true, 00:03:59.911 "zcopy": true, 00:03:59.911 "get_zone_info": false, 00:03:59.911 "zone_management": false, 00:03:59.911 "zone_append": false, 00:03:59.911 "compare": false, 00:03:59.911 "compare_and_write": false, 00:03:59.911 "abort": true, 00:03:59.911 "seek_hole": false, 00:03:59.911 "seek_data": false, 00:03:59.911 "copy": true, 00:03:59.911 "nvme_iov_md": false 00:03:59.911 }, 00:03:59.911 "memory_domains": [ 00:03:59.911 { 00:03:59.911 "dma_device_id": "system", 00:03:59.911 "dma_device_type": 1 00:03:59.911 }, 00:03:59.911 { 00:03:59.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:59.911 "dma_device_type": 2 00:03:59.911 } 00:03:59.911 ], 00:03:59.911 "driver_specific": {} 00:03:59.911 }, 00:03:59.911 { 00:03:59.911 "name": "Passthru0", 00:03:59.911 "aliases": [ 00:03:59.911 "eac4c48c-3f63-58cb-ab0f-b15f0d96e973" 00:03:59.911 ], 00:03:59.911 "product_name": "passthru", 00:03:59.911 "block_size": 512, 00:03:59.911 "num_blocks": 16384, 00:03:59.911 "uuid": "eac4c48c-3f63-58cb-ab0f-b15f0d96e973", 00:03:59.911 "assigned_rate_limits": { 00:03:59.911 "rw_ios_per_sec": 0, 00:03:59.911 "rw_mbytes_per_sec": 0, 00:03:59.911 "r_mbytes_per_sec": 0, 00:03:59.911 "w_mbytes_per_sec": 0 00:03:59.911 }, 00:03:59.911 "claimed": false, 00:03:59.911 "zoned": false, 00:03:59.911 "supported_io_types": { 00:03:59.911 "read": true, 00:03:59.911 "write": true, 00:03:59.911 "unmap": true, 00:03:59.911 "flush": true, 00:03:59.911 "reset": true, 00:03:59.911 "nvme_admin": false, 00:03:59.911 "nvme_io": false, 00:03:59.911 "nvme_io_md": false, 00:03:59.911 "write_zeroes": true, 00:03:59.911 "zcopy": true, 00:03:59.911 "get_zone_info": false, 00:03:59.911 "zone_management": false, 00:03:59.911 "zone_append": false, 00:03:59.911 "compare": false, 00:03:59.911 "compare_and_write": false, 00:03:59.911 "abort": true, 00:03:59.911 "seek_hole": false, 00:03:59.911 "seek_data": false, 00:03:59.911 "copy": true, 00:03:59.911 "nvme_iov_md": false 00:03:59.911 }, 00:03:59.911 "memory_domains": [ 00:03:59.911 { 00:03:59.911 "dma_device_id": "system", 00:03:59.911 "dma_device_type": 1 00:03:59.911 }, 00:03:59.911 { 00:03:59.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:59.911 "dma_device_type": 2 00:03:59.911 } 00:03:59.911 ], 00:03:59.911 "driver_specific": { 00:03:59.911 "passthru": { 00:03:59.911 "name": "Passthru0", 00:03:59.911 "base_bdev_name": "Malloc0" 00:03:59.911 } 00:03:59.911 } 00:03:59.911 } 00:03:59.911 ]' 00:03:59.911 06:46:59 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:59.911 06:46:59 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:59.911 06:46:59 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:59.911 06:46:59 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:59.911 06:46:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.172 06:46:59 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:00.172 06:46:59 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:00.172 06:46:59 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:00.172 06:46:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.172 06:46:59 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:00.172 06:46:59 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:04:00.172 06:46:59 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:00.172 06:46:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.172 06:46:59 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:00.172 06:46:59 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:00.172 06:46:59 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:00.172 06:46:59 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:00.172 00:04:00.172 real 0m0.304s 00:04:00.172 user 0m0.188s 00:04:00.172 sys 0m0.044s 00:04:00.172 06:46:59 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:00.172 06:46:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.172 ************************************ 00:04:00.172 END TEST rpc_integrity 00:04:00.172 ************************************ 00:04:00.172 06:46:59 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:00.172 06:46:59 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:00.172 06:46:59 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:00.172 06:46:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:00.172 ************************************ 00:04:00.172 START TEST rpc_plugins 00:04:00.172 ************************************ 00:04:00.172 06:46:59 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:00.172 06:46:59 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:00.172 06:46:59 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:00.172 06:46:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:00.172 06:46:59 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:00.172 06:46:59 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:00.172 06:46:59 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:00.172 06:46:59 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:00.172 06:46:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:00.172 06:46:59 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:00.172 06:46:59 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:00.172 { 00:04:00.172 "name": "Malloc1", 00:04:00.172 "aliases": [ 00:04:00.172 "4ad9b5f8-b564-4491-87fd-3e0869edf1a4" 00:04:00.172 ], 00:04:00.172 "product_name": "Malloc disk", 00:04:00.172 "block_size": 4096, 00:04:00.172 "num_blocks": 256, 00:04:00.172 "uuid": "4ad9b5f8-b564-4491-87fd-3e0869edf1a4", 00:04:00.172 "assigned_rate_limits": { 00:04:00.172 "rw_ios_per_sec": 0, 00:04:00.172 "rw_mbytes_per_sec": 0, 00:04:00.172 "r_mbytes_per_sec": 0, 00:04:00.172 "w_mbytes_per_sec": 0 00:04:00.172 }, 00:04:00.172 "claimed": false, 00:04:00.172 "zoned": false, 00:04:00.172 "supported_io_types": { 00:04:00.172 "read": true, 00:04:00.172 "write": true, 00:04:00.172 "unmap": true, 00:04:00.172 "flush": true, 00:04:00.172 "reset": true, 00:04:00.172 "nvme_admin": false, 00:04:00.172 "nvme_io": false, 00:04:00.172 "nvme_io_md": false, 00:04:00.172 "write_zeroes": true, 00:04:00.172 "zcopy": true, 00:04:00.172 "get_zone_info": false, 00:04:00.172 "zone_management": false, 00:04:00.172 "zone_append": false, 00:04:00.172 "compare": false, 00:04:00.172 "compare_and_write": false, 00:04:00.172 "abort": true, 00:04:00.172 "seek_hole": false, 00:04:00.172 "seek_data": false, 00:04:00.172 "copy": true, 00:04:00.172 "nvme_iov_md": false 
00:04:00.172 }, 00:04:00.172 "memory_domains": [ 00:04:00.172 { 00:04:00.172 "dma_device_id": "system", 00:04:00.172 "dma_device_type": 1 00:04:00.172 }, 00:04:00.172 { 00:04:00.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:00.172 "dma_device_type": 2 00:04:00.172 } 00:04:00.172 ], 00:04:00.172 "driver_specific": {} 00:04:00.172 } 00:04:00.172 ]' 00:04:00.172 06:46:59 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:00.172 06:46:59 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:00.172 06:46:59 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:00.172 06:46:59 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:00.173 06:46:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:00.173 06:46:59 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:00.173 06:46:59 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:00.173 06:46:59 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:00.173 06:46:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:00.173 06:46:59 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:00.433 06:46:59 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:00.433 06:46:59 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:00.433 06:46:59 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:00.433 00:04:00.433 real 0m0.152s 00:04:00.433 user 0m0.091s 00:04:00.433 sys 0m0.025s 00:04:00.433 06:46:59 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:00.433 06:46:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:00.433 ************************************ 00:04:00.433 END TEST rpc_plugins 00:04:00.433 ************************************ 00:04:00.433 06:46:59 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:00.433 06:46:59 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:00.433 06:46:59 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:00.433 06:46:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:00.433 ************************************ 00:04:00.433 START TEST rpc_trace_cmd_test 00:04:00.433 ************************************ 00:04:00.433 06:46:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:00.433 06:46:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:00.433 06:46:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:00.433 06:46:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:00.433 06:46:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:00.433 06:46:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:00.433 06:46:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:00.433 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2886213", 00:04:00.433 "tpoint_group_mask": "0x8", 00:04:00.433 "iscsi_conn": { 00:04:00.433 "mask": "0x2", 00:04:00.433 "tpoint_mask": "0x0" 00:04:00.433 }, 00:04:00.433 "scsi": { 00:04:00.433 "mask": "0x4", 00:04:00.433 "tpoint_mask": "0x0" 00:04:00.434 }, 00:04:00.434 "bdev": { 00:04:00.434 "mask": "0x8", 00:04:00.434 "tpoint_mask": "0xffffffffffffffff" 00:04:00.434 }, 00:04:00.434 "nvmf_rdma": { 00:04:00.434 "mask": "0x10", 00:04:00.434 "tpoint_mask": "0x0" 00:04:00.434 }, 00:04:00.434 "nvmf_tcp": { 00:04:00.434 "mask": "0x20", 00:04:00.434 
"tpoint_mask": "0x0" 00:04:00.434 }, 00:04:00.434 "ftl": { 00:04:00.434 "mask": "0x40", 00:04:00.434 "tpoint_mask": "0x0" 00:04:00.434 }, 00:04:00.434 "blobfs": { 00:04:00.434 "mask": "0x80", 00:04:00.434 "tpoint_mask": "0x0" 00:04:00.434 }, 00:04:00.434 "dsa": { 00:04:00.434 "mask": "0x200", 00:04:00.434 "tpoint_mask": "0x0" 00:04:00.434 }, 00:04:00.434 "thread": { 00:04:00.434 "mask": "0x400", 00:04:00.434 "tpoint_mask": "0x0" 00:04:00.434 }, 00:04:00.434 "nvme_pcie": { 00:04:00.434 "mask": "0x800", 00:04:00.434 "tpoint_mask": "0x0" 00:04:00.434 }, 00:04:00.434 "iaa": { 00:04:00.434 "mask": "0x1000", 00:04:00.434 "tpoint_mask": "0x0" 00:04:00.434 }, 00:04:00.434 "nvme_tcp": { 00:04:00.434 "mask": "0x2000", 00:04:00.434 "tpoint_mask": "0x0" 00:04:00.434 }, 00:04:00.434 "bdev_nvme": { 00:04:00.434 "mask": "0x4000", 00:04:00.434 "tpoint_mask": "0x0" 00:04:00.434 }, 00:04:00.434 "sock": { 00:04:00.434 "mask": "0x8000", 00:04:00.434 "tpoint_mask": "0x0" 00:04:00.434 }, 00:04:00.434 "blob": { 00:04:00.434 "mask": "0x10000", 00:04:00.434 "tpoint_mask": "0x0" 00:04:00.434 }, 00:04:00.434 "bdev_raid": { 00:04:00.434 "mask": "0x20000", 00:04:00.434 "tpoint_mask": "0x0" 00:04:00.434 }, 00:04:00.434 "scheduler": { 00:04:00.434 "mask": "0x40000", 00:04:00.434 "tpoint_mask": "0x0" 00:04:00.434 } 00:04:00.434 }' 00:04:00.434 06:46:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:00.434 06:46:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:00.434 06:46:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:00.434 06:46:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:00.434 06:46:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:00.696 06:46:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:00.696 06:46:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:00.696 06:47:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:00.696 06:47:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:00.696 06:47:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:00.696 00:04:00.696 real 0m0.253s 00:04:00.696 user 0m0.207s 00:04:00.696 sys 0m0.036s 00:04:00.696 06:47:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:00.696 06:47:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:00.696 ************************************ 00:04:00.696 END TEST rpc_trace_cmd_test 00:04:00.696 ************************************ 00:04:00.696 06:47:00 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:00.696 06:47:00 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:00.696 06:47:00 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:00.696 06:47:00 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:00.696 06:47:00 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:00.696 06:47:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:00.696 ************************************ 00:04:00.696 START TEST rpc_daemon_integrity 00:04:00.696 ************************************ 00:04:00.696 06:47:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:00.696 06:47:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:00.696 06:47:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:00.696 06:47:00 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.696 06:47:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:00.696 06:47:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:00.696 06:47:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:00.696 06:47:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:00.696 06:47:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:00.696 06:47:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:00.696 06:47:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.957 06:47:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:00.957 06:47:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:00.957 06:47:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:00.957 06:47:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:00.957 06:47:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.957 06:47:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:00.957 06:47:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:00.957 { 00:04:00.957 "name": "Malloc2", 00:04:00.957 "aliases": [ 00:04:00.957 "50826f90-6176-4936-bf76-bf4fd0db6ae0" 00:04:00.957 ], 00:04:00.957 "product_name": "Malloc disk", 00:04:00.957 "block_size": 512, 00:04:00.957 "num_blocks": 16384, 00:04:00.957 "uuid": "50826f90-6176-4936-bf76-bf4fd0db6ae0", 00:04:00.957 "assigned_rate_limits": { 00:04:00.957 "rw_ios_per_sec": 0, 00:04:00.957 "rw_mbytes_per_sec": 0, 00:04:00.957 "r_mbytes_per_sec": 0, 00:04:00.957 "w_mbytes_per_sec": 0 00:04:00.957 }, 00:04:00.957 "claimed": false, 00:04:00.957 "zoned": false, 00:04:00.957 "supported_io_types": { 00:04:00.957 "read": true, 00:04:00.957 "write": true, 00:04:00.957 "unmap": true, 00:04:00.957 "flush": true, 00:04:00.957 "reset": true, 00:04:00.957 "nvme_admin": false, 00:04:00.957 "nvme_io": false, 00:04:00.957 "nvme_io_md": false, 00:04:00.957 "write_zeroes": true, 00:04:00.957 "zcopy": true, 00:04:00.957 "get_zone_info": false, 00:04:00.957 "zone_management": false, 00:04:00.957 "zone_append": false, 00:04:00.958 "compare": false, 00:04:00.958 "compare_and_write": false, 00:04:00.958 "abort": true, 00:04:00.958 "seek_hole": false, 00:04:00.958 "seek_data": false, 00:04:00.958 "copy": true, 00:04:00.958 "nvme_iov_md": false 00:04:00.958 }, 00:04:00.958 "memory_domains": [ 00:04:00.958 { 00:04:00.958 "dma_device_id": "system", 00:04:00.958 "dma_device_type": 1 00:04:00.958 }, 00:04:00.958 { 00:04:00.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:00.958 "dma_device_type": 2 00:04:00.958 } 00:04:00.958 ], 00:04:00.958 "driver_specific": {} 00:04:00.958 } 00:04:00.958 ]' 00:04:00.958 06:47:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:00.958 06:47:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:00.958 06:47:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:00.958 06:47:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:00.958 06:47:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.958 [2024-10-16 06:47:00.273601] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:00.958 
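
This match notice, together with the claim and registration notices that continue directly below, shows the passthru vbdev stacking onto Malloc2, the same Malloc/Passthru pairing exercised earlier with Malloc0. A minimal sketch of driving that stacking by hand, assuming scripts/rpc.py from the same tree as the manual counterpart of the suite's rpc_cmd wrapper (it targets the default /var/tmp/spdk.sock shown above):

$ scripts/rpc.py bdev_malloc_create 8 512                   # auto-named, e.g. Malloc0
$ scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
$ scripts/rpc.py bdev_get_bdevs | jq length                 # 2: base bdev plus passthru
$ scripts/rpc.py bdev_passthru_delete Passthru0
$ scripts/rpc.py bdev_malloc_delete Malloc0
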
[2024-10-16 06:47:00.273651] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:00.958 [2024-10-16 06:47:00.273671] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2137a90 00:04:00.958 [2024-10-16 06:47:00.273681] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:00.958 [2024-10-16 06:47:00.275245] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:00.958 [2024-10-16 06:47:00.275288] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:00.958 Passthru0 00:04:00.958 06:47:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:00.958 06:47:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:00.958 06:47:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:00.958 06:47:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.958 06:47:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:00.958 06:47:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:00.958 { 00:04:00.958 "name": "Malloc2", 00:04:00.958 "aliases": [ 00:04:00.958 "50826f90-6176-4936-bf76-bf4fd0db6ae0" 00:04:00.958 ], 00:04:00.958 "product_name": "Malloc disk", 00:04:00.958 "block_size": 512, 00:04:00.958 "num_blocks": 16384, 00:04:00.958 "uuid": "50826f90-6176-4936-bf76-bf4fd0db6ae0", 00:04:00.958 "assigned_rate_limits": { 00:04:00.958 "rw_ios_per_sec": 0, 00:04:00.958 "rw_mbytes_per_sec": 0, 00:04:00.958 "r_mbytes_per_sec": 0, 00:04:00.958 "w_mbytes_per_sec": 0 00:04:00.958 }, 00:04:00.958 "claimed": true, 00:04:00.958 "claim_type": "exclusive_write", 00:04:00.958 "zoned": false, 00:04:00.958 "supported_io_types": { 00:04:00.958 "read": true, 00:04:00.958 "write": true, 00:04:00.958 "unmap": true, 00:04:00.958 "flush": true, 00:04:00.958 "reset": true, 00:04:00.958 "nvme_admin": false, 00:04:00.958 "nvme_io": false, 00:04:00.958 "nvme_io_md": false, 00:04:00.958 "write_zeroes": true, 00:04:00.958 "zcopy": true, 00:04:00.958 "get_zone_info": false, 00:04:00.958 "zone_management": false, 00:04:00.958 "zone_append": false, 00:04:00.958 "compare": false, 00:04:00.958 "compare_and_write": false, 00:04:00.958 "abort": true, 00:04:00.958 "seek_hole": false, 00:04:00.958 "seek_data": false, 00:04:00.958 "copy": true, 00:04:00.958 "nvme_iov_md": false 00:04:00.958 }, 00:04:00.958 "memory_domains": [ 00:04:00.958 { 00:04:00.958 "dma_device_id": "system", 00:04:00.958 "dma_device_type": 1 00:04:00.958 }, 00:04:00.958 { 00:04:00.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:00.958 "dma_device_type": 2 00:04:00.958 } 00:04:00.958 ], 00:04:00.958 "driver_specific": {} 00:04:00.958 }, 00:04:00.958 { 00:04:00.958 "name": "Passthru0", 00:04:00.958 "aliases": [ 00:04:00.958 "5d9c5e4d-1414-5c54-9fed-45e6fa86c6d8" 00:04:00.958 ], 00:04:00.958 "product_name": "passthru", 00:04:00.958 "block_size": 512, 00:04:00.958 "num_blocks": 16384, 00:04:00.958 "uuid": "5d9c5e4d-1414-5c54-9fed-45e6fa86c6d8", 00:04:00.958 "assigned_rate_limits": { 00:04:00.958 "rw_ios_per_sec": 0, 00:04:00.958 "rw_mbytes_per_sec": 0, 00:04:00.958 "r_mbytes_per_sec": 0, 00:04:00.958 "w_mbytes_per_sec": 0 00:04:00.958 }, 00:04:00.958 "claimed": false, 00:04:00.958 "zoned": false, 00:04:00.958 "supported_io_types": { 00:04:00.958 "read": true, 00:04:00.958 "write": true, 00:04:00.958 "unmap": true, 00:04:00.958 "flush": true, 00:04:00.958 "reset": true, 
00:04:00.958 "nvme_admin": false, 00:04:00.958 "nvme_io": false, 00:04:00.958 "nvme_io_md": false, 00:04:00.958 "write_zeroes": true, 00:04:00.958 "zcopy": true, 00:04:00.958 "get_zone_info": false, 00:04:00.958 "zone_management": false, 00:04:00.958 "zone_append": false, 00:04:00.958 "compare": false, 00:04:00.958 "compare_and_write": false, 00:04:00.958 "abort": true, 00:04:00.958 "seek_hole": false, 00:04:00.958 "seek_data": false, 00:04:00.958 "copy": true, 00:04:00.958 "nvme_iov_md": false 00:04:00.958 }, 00:04:00.958 "memory_domains": [ 00:04:00.958 { 00:04:00.958 "dma_device_id": "system", 00:04:00.958 "dma_device_type": 1 00:04:00.958 }, 00:04:00.958 { 00:04:00.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:00.958 "dma_device_type": 2 00:04:00.958 } 00:04:00.958 ], 00:04:00.958 "driver_specific": { 00:04:00.958 "passthru": { 00:04:00.958 "name": "Passthru0", 00:04:00.958 "base_bdev_name": "Malloc2" 00:04:00.958 } 00:04:00.958 } 00:04:00.958 } 00:04:00.958 ]' 00:04:00.958 06:47:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:00.958 06:47:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:00.958 06:47:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:00.958 06:47:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:00.958 06:47:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.958 06:47:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:00.958 06:47:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:00.958 06:47:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:00.958 06:47:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.958 06:47:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:00.958 06:47:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:00.958 06:47:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:00.958 06:47:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.958 06:47:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:00.958 06:47:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:00.958 06:47:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:00.958 06:47:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:00.958 00:04:00.958 real 0m0.299s 00:04:00.958 user 0m0.189s 00:04:00.958 sys 0m0.045s 00:04:00.958 06:47:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:00.958 06:47:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.958 ************************************ 00:04:00.958 END TEST rpc_daemon_integrity 00:04:00.958 ************************************ 00:04:01.218 06:47:00 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:01.218 06:47:00 rpc -- rpc/rpc.sh@84 -- # killprocess 2886213 00:04:01.218 06:47:00 rpc -- common/autotest_common.sh@950 -- # '[' -z 2886213 ']' 00:04:01.218 06:47:00 rpc -- common/autotest_common.sh@954 -- # kill -0 2886213 00:04:01.218 06:47:00 rpc -- common/autotest_common.sh@955 -- # uname 00:04:01.218 06:47:00 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:01.218 06:47:00 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2886213 
00:04:01.218 06:47:00 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:01.218 06:47:00 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:01.218 06:47:00 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2886213' 00:04:01.218 killing process with pid 2886213 00:04:01.218 06:47:00 rpc -- common/autotest_common.sh@969 -- # kill 2886213 00:04:01.218 06:47:00 rpc -- common/autotest_common.sh@974 -- # wait 2886213 00:04:01.479 00:04:01.479 real 0m2.718s 00:04:01.479 user 0m3.464s 00:04:01.479 sys 0m0.844s 00:04:01.479 06:47:00 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:01.479 06:47:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:01.479 ************************************ 00:04:01.479 END TEST rpc 00:04:01.479 ************************************ 00:04:01.479 06:47:00 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:01.479 06:47:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:01.479 06:47:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:01.479 06:47:00 -- common/autotest_common.sh@10 -- # set +x 00:04:01.479 ************************************ 00:04:01.479 START TEST skip_rpc 00:04:01.479 ************************************ 00:04:01.479 06:47:00 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:01.479 * Looking for test storage... 00:04:01.479 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:01.479 06:47:00 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:01.479 06:47:00 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:01.479 06:47:00 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:01.739 06:47:01 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:01.739 06:47:01 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:01.739 06:47:01 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:01.739 06:47:01 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:01.739 06:47:01 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:01.739 06:47:01 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:01.739 06:47:01 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:01.739 06:47:01 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:01.739 06:47:01 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:01.739 06:47:01 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:01.739 06:47:01 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:01.739 06:47:01 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:01.739 06:47:01 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:01.739 06:47:01 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:01.739 06:47:01 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:01.739 06:47:01 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:01.739 06:47:01 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:01.739 06:47:01 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:01.739 06:47:01 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:01.739 06:47:01 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:01.739 06:47:01 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:01.739 06:47:01 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:01.739 06:47:01 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:01.739 06:47:01 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:01.739 06:47:01 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:01.739 06:47:01 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:01.739 06:47:01 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:01.739 06:47:01 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:01.739 06:47:01 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:01.739 06:47:01 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:01.739 06:47:01 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:01.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.739 --rc genhtml_branch_coverage=1 00:04:01.739 --rc genhtml_function_coverage=1 00:04:01.739 --rc genhtml_legend=1 00:04:01.739 --rc geninfo_all_blocks=1 00:04:01.739 --rc geninfo_unexecuted_blocks=1 00:04:01.739 00:04:01.739 ' 00:04:01.739 06:47:01 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:01.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.739 --rc genhtml_branch_coverage=1 00:04:01.739 --rc genhtml_function_coverage=1 00:04:01.739 --rc genhtml_legend=1 00:04:01.739 --rc geninfo_all_blocks=1 00:04:01.739 --rc geninfo_unexecuted_blocks=1 00:04:01.739 00:04:01.739 ' 00:04:01.739 06:47:01 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:01.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.739 --rc genhtml_branch_coverage=1 00:04:01.739 --rc genhtml_function_coverage=1 00:04:01.739 --rc genhtml_legend=1 00:04:01.739 --rc geninfo_all_blocks=1 00:04:01.739 --rc geninfo_unexecuted_blocks=1 00:04:01.739 00:04:01.739 ' 00:04:01.739 06:47:01 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:01.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.739 --rc genhtml_branch_coverage=1 00:04:01.739 --rc genhtml_function_coverage=1 00:04:01.739 --rc genhtml_legend=1 00:04:01.739 --rc geninfo_all_blocks=1 00:04:01.739 --rc geninfo_unexecuted_blocks=1 00:04:01.739 00:04:01.739 ' 00:04:01.739 06:47:01 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:01.739 06:47:01 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:01.739 06:47:01 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:01.739 06:47:01 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:01.739 06:47:01 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:01.739 06:47:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:01.739 ************************************ 00:04:01.739 START TEST skip_rpc 00:04:01.739 ************************************ 00:04:01.739 06:47:01 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:01.739 
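
The run that follows starts spdk_tgt with --no-rpc-server, so the suite's NOT rpc_cmd check expects every RPC to fail. A rough sketch of the same negative check by hand, with the rpc.py path assumed as above:

$ ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
$ scripts/rpc.py spdk_get_version || echo "RPC refused, as expected"
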
06:47:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2887058 00:04:01.739 06:47:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:01.739 06:47:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:01.739 06:47:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:01.739 [2024-10-16 06:47:01.164176] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:04:01.739 [2024-10-16 06:47:01.164242] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2887058 ] 00:04:01.999 [2024-10-16 06:47:01.245479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:01.999 [2024-10-16 06:47:01.298298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:07.286 06:47:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:07.286 06:47:06 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:07.286 06:47:06 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:07.286 06:47:06 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:07.286 06:47:06 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:07.286 06:47:06 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:07.286 06:47:06 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:07.286 06:47:06 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:07.286 06:47:06 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.286 06:47:06 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.286 06:47:06 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:07.286 06:47:06 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:07.286 06:47:06 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:07.286 06:47:06 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:07.286 06:47:06 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:07.286 06:47:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:07.286 06:47:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2887058 00:04:07.286 06:47:06 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 2887058 ']' 00:04:07.286 06:47:06 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 2887058 00:04:07.286 06:47:06 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:07.286 06:47:06 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:07.286 06:47:06 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2887058 00:04:07.286 06:47:06 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:07.286 06:47:06 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:07.286 06:47:06 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2887058' 00:04:07.286 killing process with pid 2887058 00:04:07.286 06:47:06 
skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 2887058 00:04:07.286 06:47:06 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 2887058 00:04:07.286 00:04:07.286 real 0m5.262s 00:04:07.286 user 0m5.018s 00:04:07.286 sys 0m0.292s 00:04:07.286 06:47:06 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:07.286 06:47:06 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.286 ************************************ 00:04:07.286 END TEST skip_rpc 00:04:07.286 ************************************ 00:04:07.286 06:47:06 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:07.286 06:47:06 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:07.286 06:47:06 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:07.286 06:47:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.286 ************************************ 00:04:07.286 START TEST skip_rpc_with_json 00:04:07.286 ************************************ 00:04:07.286 06:47:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:07.286 06:47:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:07.286 06:47:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2888100 00:04:07.286 06:47:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:07.286 06:47:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2888100 00:04:07.286 06:47:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:07.286 06:47:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 2888100 ']' 00:04:07.286 06:47:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:07.286 06:47:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:07.286 06:47:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:07.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:07.287 06:47:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:07.287 06:47:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:07.287 [2024-10-16 06:47:06.498467] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
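[annotation] The skip_rpc pass above boils down to one assertion: started with --no-rpc-server, spdk_tgt must come up but answer no JSON-RPC, so rpc_cmd spdk_get_version has to fail (the NOT wrapper inverts the exit status). A minimal standalone sketch of the same check, with the test's trap/killprocess helpers simplified to a plain sleep and kill, paths as in this workspace:

# Sketch of the skip_rpc assertion; sleep/kill stand in for the test's helpers.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/bin/spdk_tgt" --no-rpc-server -m 0x1 &
spdk_pid=$!
sleep 5
if "$SPDK/scripts/rpc.py" spdk_get_version; then
    echo "unexpected: RPC server answered"; kill "$spdk_pid"; exit 1
fi
echo "expected failure: no RPC server is listening"
kill "$spdk_pid"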
00:04:07.287 [2024-10-16 06:47:06.498528] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2888100 ] 00:04:07.287 [2024-10-16 06:47:06.577204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:07.287 [2024-10-16 06:47:06.613129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:07.857 06:47:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:07.857 06:47:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:07.857 06:47:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:07.857 06:47:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.857 06:47:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:07.857 [2024-10-16 06:47:07.301897] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:07.857 request: 00:04:07.857 { 00:04:07.857 "trtype": "tcp", 00:04:07.857 "method": "nvmf_get_transports", 00:04:07.857 "req_id": 1 00:04:07.857 } 00:04:07.857 Got JSON-RPC error response 00:04:07.857 response: 00:04:07.857 { 00:04:07.857 "code": -19, 00:04:07.857 "message": "No such device" 00:04:07.857 } 00:04:07.857 06:47:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:07.857 06:47:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:07.857 06:47:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.857 06:47:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:07.857 [2024-10-16 06:47:07.314000] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:07.857 06:47:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.857 06:47:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:07.857 06:47:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.857 06:47:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:08.117 06:47:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.117 06:47:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:08.117 { 00:04:08.117 "subsystems": [ 00:04:08.117 { 00:04:08.117 "subsystem": "fsdev", 00:04:08.117 "config": [ 00:04:08.117 { 00:04:08.117 "method": "fsdev_set_opts", 00:04:08.117 "params": { 00:04:08.117 "fsdev_io_pool_size": 65535, 00:04:08.117 "fsdev_io_cache_size": 256 00:04:08.117 } 00:04:08.117 } 00:04:08.117 ] 00:04:08.117 }, 00:04:08.117 { 00:04:08.117 "subsystem": "vfio_user_target", 00:04:08.117 "config": null 00:04:08.117 }, 00:04:08.117 { 00:04:08.117 "subsystem": "keyring", 00:04:08.117 "config": [] 00:04:08.117 }, 00:04:08.117 { 00:04:08.117 "subsystem": "iobuf", 00:04:08.117 "config": [ 00:04:08.117 { 00:04:08.117 "method": "iobuf_set_options", 00:04:08.117 "params": { 00:04:08.117 "small_pool_count": 8192, 00:04:08.117 "large_pool_count": 1024, 00:04:08.117 "small_bufsize": 8192, 00:04:08.117 "large_bufsize": 135168 00:04:08.117 } 00:04:08.117 } 00:04:08.117 ] 00:04:08.117 }, 00:04:08.117 { 
00:04:08.117 "subsystem": "sock", 00:04:08.117 "config": [ 00:04:08.117 { 00:04:08.117 "method": "sock_set_default_impl", 00:04:08.117 "params": { 00:04:08.117 "impl_name": "posix" 00:04:08.117 } 00:04:08.117 }, 00:04:08.117 { 00:04:08.117 "method": "sock_impl_set_options", 00:04:08.117 "params": { 00:04:08.117 "impl_name": "ssl", 00:04:08.117 "recv_buf_size": 4096, 00:04:08.117 "send_buf_size": 4096, 00:04:08.117 "enable_recv_pipe": true, 00:04:08.117 "enable_quickack": false, 00:04:08.117 "enable_placement_id": 0, 00:04:08.117 "enable_zerocopy_send_server": true, 00:04:08.117 "enable_zerocopy_send_client": false, 00:04:08.117 "zerocopy_threshold": 0, 00:04:08.117 "tls_version": 0, 00:04:08.117 "enable_ktls": false 00:04:08.117 } 00:04:08.117 }, 00:04:08.117 { 00:04:08.117 "method": "sock_impl_set_options", 00:04:08.117 "params": { 00:04:08.117 "impl_name": "posix", 00:04:08.117 "recv_buf_size": 2097152, 00:04:08.117 "send_buf_size": 2097152, 00:04:08.117 "enable_recv_pipe": true, 00:04:08.117 "enable_quickack": false, 00:04:08.117 "enable_placement_id": 0, 00:04:08.117 "enable_zerocopy_send_server": true, 00:04:08.117 "enable_zerocopy_send_client": false, 00:04:08.117 "zerocopy_threshold": 0, 00:04:08.117 "tls_version": 0, 00:04:08.118 "enable_ktls": false 00:04:08.118 } 00:04:08.118 } 00:04:08.118 ] 00:04:08.118 }, 00:04:08.118 { 00:04:08.118 "subsystem": "vmd", 00:04:08.118 "config": [] 00:04:08.118 }, 00:04:08.118 { 00:04:08.118 "subsystem": "accel", 00:04:08.118 "config": [ 00:04:08.118 { 00:04:08.118 "method": "accel_set_options", 00:04:08.118 "params": { 00:04:08.118 "small_cache_size": 128, 00:04:08.118 "large_cache_size": 16, 00:04:08.118 "task_count": 2048, 00:04:08.118 "sequence_count": 2048, 00:04:08.118 "buf_count": 2048 00:04:08.118 } 00:04:08.118 } 00:04:08.118 ] 00:04:08.118 }, 00:04:08.118 { 00:04:08.118 "subsystem": "bdev", 00:04:08.118 "config": [ 00:04:08.118 { 00:04:08.118 "method": "bdev_set_options", 00:04:08.118 "params": { 00:04:08.118 "bdev_io_pool_size": 65535, 00:04:08.118 "bdev_io_cache_size": 256, 00:04:08.118 "bdev_auto_examine": true, 00:04:08.118 "iobuf_small_cache_size": 128, 00:04:08.118 "iobuf_large_cache_size": 16 00:04:08.118 } 00:04:08.118 }, 00:04:08.118 { 00:04:08.118 "method": "bdev_raid_set_options", 00:04:08.118 "params": { 00:04:08.118 "process_window_size_kb": 1024, 00:04:08.118 "process_max_bandwidth_mb_sec": 0 00:04:08.118 } 00:04:08.118 }, 00:04:08.118 { 00:04:08.118 "method": "bdev_iscsi_set_options", 00:04:08.118 "params": { 00:04:08.118 "timeout_sec": 30 00:04:08.118 } 00:04:08.118 }, 00:04:08.118 { 00:04:08.118 "method": "bdev_nvme_set_options", 00:04:08.118 "params": { 00:04:08.118 "action_on_timeout": "none", 00:04:08.118 "timeout_us": 0, 00:04:08.118 "timeout_admin_us": 0, 00:04:08.118 "keep_alive_timeout_ms": 10000, 00:04:08.118 "arbitration_burst": 0, 00:04:08.118 "low_priority_weight": 0, 00:04:08.118 "medium_priority_weight": 0, 00:04:08.118 "high_priority_weight": 0, 00:04:08.118 "nvme_adminq_poll_period_us": 10000, 00:04:08.118 "nvme_ioq_poll_period_us": 0, 00:04:08.118 "io_queue_requests": 0, 00:04:08.118 "delay_cmd_submit": true, 00:04:08.118 "transport_retry_count": 4, 00:04:08.118 "bdev_retry_count": 3, 00:04:08.118 "transport_ack_timeout": 0, 00:04:08.118 "ctrlr_loss_timeout_sec": 0, 00:04:08.118 "reconnect_delay_sec": 0, 00:04:08.118 "fast_io_fail_timeout_sec": 0, 00:04:08.118 "disable_auto_failback": false, 00:04:08.118 "generate_uuids": false, 00:04:08.118 "transport_tos": 0, 00:04:08.118 "nvme_error_stat": false, 
00:04:08.118 "rdma_srq_size": 0, 00:04:08.118 "io_path_stat": false, 00:04:08.118 "allow_accel_sequence": false, 00:04:08.118 "rdma_max_cq_size": 0, 00:04:08.118 "rdma_cm_event_timeout_ms": 0, 00:04:08.118 "dhchap_digests": [ 00:04:08.118 "sha256", 00:04:08.118 "sha384", 00:04:08.118 "sha512" 00:04:08.118 ], 00:04:08.118 "dhchap_dhgroups": [ 00:04:08.118 "null", 00:04:08.118 "ffdhe2048", 00:04:08.118 "ffdhe3072", 00:04:08.118 "ffdhe4096", 00:04:08.118 "ffdhe6144", 00:04:08.118 "ffdhe8192" 00:04:08.118 ] 00:04:08.118 } 00:04:08.118 }, 00:04:08.118 { 00:04:08.118 "method": "bdev_nvme_set_hotplug", 00:04:08.118 "params": { 00:04:08.118 "period_us": 100000, 00:04:08.118 "enable": false 00:04:08.118 } 00:04:08.118 }, 00:04:08.118 { 00:04:08.118 "method": "bdev_wait_for_examine" 00:04:08.118 } 00:04:08.118 ] 00:04:08.118 }, 00:04:08.118 { 00:04:08.118 "subsystem": "scsi", 00:04:08.118 "config": null 00:04:08.118 }, 00:04:08.118 { 00:04:08.118 "subsystem": "scheduler", 00:04:08.118 "config": [ 00:04:08.118 { 00:04:08.118 "method": "framework_set_scheduler", 00:04:08.118 "params": { 00:04:08.118 "name": "static" 00:04:08.118 } 00:04:08.118 } 00:04:08.118 ] 00:04:08.118 }, 00:04:08.118 { 00:04:08.118 "subsystem": "vhost_scsi", 00:04:08.118 "config": [] 00:04:08.118 }, 00:04:08.118 { 00:04:08.118 "subsystem": "vhost_blk", 00:04:08.118 "config": [] 00:04:08.118 }, 00:04:08.118 { 00:04:08.118 "subsystem": "ublk", 00:04:08.118 "config": [] 00:04:08.118 }, 00:04:08.118 { 00:04:08.118 "subsystem": "nbd", 00:04:08.118 "config": [] 00:04:08.118 }, 00:04:08.118 { 00:04:08.118 "subsystem": "nvmf", 00:04:08.118 "config": [ 00:04:08.118 { 00:04:08.118 "method": "nvmf_set_config", 00:04:08.118 "params": { 00:04:08.118 "discovery_filter": "match_any", 00:04:08.118 "admin_cmd_passthru": { 00:04:08.118 "identify_ctrlr": false 00:04:08.118 }, 00:04:08.118 "dhchap_digests": [ 00:04:08.118 "sha256", 00:04:08.118 "sha384", 00:04:08.118 "sha512" 00:04:08.118 ], 00:04:08.118 "dhchap_dhgroups": [ 00:04:08.118 "null", 00:04:08.118 "ffdhe2048", 00:04:08.118 "ffdhe3072", 00:04:08.118 "ffdhe4096", 00:04:08.118 "ffdhe6144", 00:04:08.118 "ffdhe8192" 00:04:08.118 ] 00:04:08.118 } 00:04:08.118 }, 00:04:08.118 { 00:04:08.118 "method": "nvmf_set_max_subsystems", 00:04:08.118 "params": { 00:04:08.118 "max_subsystems": 1024 00:04:08.118 } 00:04:08.118 }, 00:04:08.118 { 00:04:08.118 "method": "nvmf_set_crdt", 00:04:08.118 "params": { 00:04:08.118 "crdt1": 0, 00:04:08.118 "crdt2": 0, 00:04:08.118 "crdt3": 0 00:04:08.118 } 00:04:08.118 }, 00:04:08.118 { 00:04:08.118 "method": "nvmf_create_transport", 00:04:08.118 "params": { 00:04:08.118 "trtype": "TCP", 00:04:08.118 "max_queue_depth": 128, 00:04:08.118 "max_io_qpairs_per_ctrlr": 127, 00:04:08.118 "in_capsule_data_size": 4096, 00:04:08.118 "max_io_size": 131072, 00:04:08.118 "io_unit_size": 131072, 00:04:08.118 "max_aq_depth": 128, 00:04:08.118 "num_shared_buffers": 511, 00:04:08.118 "buf_cache_size": 4294967295, 00:04:08.118 "dif_insert_or_strip": false, 00:04:08.118 "zcopy": false, 00:04:08.118 "c2h_success": true, 00:04:08.118 "sock_priority": 0, 00:04:08.118 "abort_timeout_sec": 1, 00:04:08.118 "ack_timeout": 0, 00:04:08.118 "data_wr_pool_size": 0 00:04:08.118 } 00:04:08.118 } 00:04:08.118 ] 00:04:08.118 }, 00:04:08.118 { 00:04:08.118 "subsystem": "iscsi", 00:04:08.118 "config": [ 00:04:08.118 { 00:04:08.118 "method": "iscsi_set_options", 00:04:08.118 "params": { 00:04:08.118 "node_base": "iqn.2016-06.io.spdk", 00:04:08.118 "max_sessions": 128, 00:04:08.118 
"max_connections_per_session": 2, 00:04:08.118 "max_queue_depth": 64, 00:04:08.118 "default_time2wait": 2, 00:04:08.118 "default_time2retain": 20, 00:04:08.118 "first_burst_length": 8192, 00:04:08.118 "immediate_data": true, 00:04:08.118 "allow_duplicated_isid": false, 00:04:08.118 "error_recovery_level": 0, 00:04:08.118 "nop_timeout": 60, 00:04:08.118 "nop_in_interval": 30, 00:04:08.118 "disable_chap": false, 00:04:08.118 "require_chap": false, 00:04:08.118 "mutual_chap": false, 00:04:08.118 "chap_group": 0, 00:04:08.118 "max_large_datain_per_connection": 64, 00:04:08.118 "max_r2t_per_connection": 4, 00:04:08.118 "pdu_pool_size": 36864, 00:04:08.118 "immediate_data_pool_size": 16384, 00:04:08.118 "data_out_pool_size": 2048 00:04:08.118 } 00:04:08.118 } 00:04:08.118 ] 00:04:08.118 } 00:04:08.118 ] 00:04:08.118 } 00:04:08.118 06:47:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:08.118 06:47:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2888100 00:04:08.118 06:47:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 2888100 ']' 00:04:08.118 06:47:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 2888100 00:04:08.118 06:47:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:08.118 06:47:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:08.118 06:47:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2888100 00:04:08.118 06:47:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:08.118 06:47:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:08.118 06:47:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2888100' 00:04:08.118 killing process with pid 2888100 00:04:08.118 06:47:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 2888100 00:04:08.118 06:47:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 2888100 00:04:08.380 06:47:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2888442 00:04:08.380 06:47:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:08.380 06:47:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:13.664 06:47:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2888442 00:04:13.664 06:47:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 2888442 ']' 00:04:13.664 06:47:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 2888442 00:04:13.664 06:47:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:13.664 06:47:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:13.664 06:47:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2888442 00:04:13.664 06:47:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:13.664 06:47:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:13.664 06:47:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 2888442' 00:04:13.664 killing process with pid 2888442 00:04:13.664 06:47:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 2888442 00:04:13.664 06:47:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 2888442 00:04:13.664 06:47:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:13.664 06:47:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:13.664 00:04:13.664 real 0m6.566s 00:04:13.664 user 0m6.478s 00:04:13.664 sys 0m0.571s 00:04:13.664 06:47:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:13.664 06:47:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:13.664 ************************************ 00:04:13.664 END TEST skip_rpc_with_json 00:04:13.664 ************************************ 00:04:13.664 06:47:13 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:13.664 06:47:13 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:13.664 06:47:13 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:13.664 06:47:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.664 ************************************ 00:04:13.664 START TEST skip_rpc_with_delay 00:04:13.664 ************************************ 00:04:13.664 06:47:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:13.664 06:47:13 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:13.664 06:47:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:13.664 06:47:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:13.664 06:47:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:13.664 06:47:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:13.664 06:47:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:13.664 06:47:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:13.664 06:47:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:13.664 06:47:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:13.664 06:47:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:13.664 06:47:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:13.664 06:47:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:13.664 [2024-10-16 
06:47:13.150431] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:13.664 06:47:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:13.664 06:47:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:13.664 06:47:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:13.664 06:47:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:13.664 00:04:13.664 real 0m0.084s 00:04:13.664 user 0m0.054s 00:04:13.664 sys 0m0.030s 00:04:13.664 06:47:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:13.664 06:47:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:13.664 ************************************ 00:04:13.664 END TEST skip_rpc_with_delay 00:04:13.664 ************************************ 00:04:13.924 06:47:13 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:13.924 06:47:13 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:13.924 06:47:13 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:13.924 06:47:13 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:13.924 06:47:13 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:13.924 06:47:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.924 ************************************ 00:04:13.924 START TEST exit_on_failed_rpc_init 00:04:13.924 ************************************ 00:04:13.924 06:47:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:13.924 06:47:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2889507 00:04:13.924 06:47:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2889507 00:04:13.924 06:47:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:13.924 06:47:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 2889507 ']' 00:04:13.924 06:47:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:13.924 06:47:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:13.924 06:47:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:13.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:13.924 06:47:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:13.925 06:47:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:13.925 [2024-10-16 06:47:13.305431] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
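[annotation] The two tests that just finished hang together: skip_rpc_with_json proves a target's live state survives a save_config/--json round trip (the relaunched pid 2888442 only passes if 'TCP Transport Init' reappears in its log), and skip_rpc_with_delay proves spdk_tgt rejects --wait-for-rpc outright when --no-rpc-server means no RPC server will ever start. A sketch of the round trip, with the stop/start plumbing simplified:

# Sketch of the save/reload round trip skip_rpc_with_json verifies.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp          # make state worth saving
"$SPDK/scripts/rpc.py" save_config > "$SPDK/test/rpc/config.json"
# ...kill the first target, then relaunch purely from the snapshot:
"$SPDK/build/bin/spdk_tgt" --no-rpc-server -m 0x1 \
    --json "$SPDK/test/rpc/config.json" > "$SPDK/test/rpc/log.txt" 2>&1 &
sleep 5
grep -q 'TCP Transport Init' "$SPDK/test/rpc/log.txt"        # transport must come back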
00:04:13.925 [2024-10-16 06:47:13.305479] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2889507 ] 00:04:13.925 [2024-10-16 06:47:13.350399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:13.925 [2024-10-16 06:47:13.383064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.185 06:47:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:14.185 06:47:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:04:14.185 06:47:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:14.185 06:47:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:14.185 06:47:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:14.185 06:47:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:14.185 06:47:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:14.185 06:47:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:14.185 06:47:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:14.185 06:47:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:14.185 06:47:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:14.185 06:47:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:14.185 06:47:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:14.185 06:47:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:14.185 06:47:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:14.185 [2024-10-16 06:47:13.629045] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:04:14.185 [2024-10-16 06:47:13.629099] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2889515 ] 00:04:14.445 [2024-10-16 06:47:13.703674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:14.445 [2024-10-16 06:47:13.739401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:14.445 [2024-10-16 06:47:13.739448] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
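[annotation] exit_on_failed_rpc_init is deliberately provoking that error: the second spdk_tgt (mask 0x2) reuses the default /var/tmp/spdk.sock, rpc.c refuses to listen, and the test checks the failing child exits non-zero while the first target is still killed cleanly. For contrast, running two targets side by side just needs distinct sockets via -r, as the json_config test does later with /var/tmp/spdk_tgt.sock; a sketch with arbitrary socket names:

# Sketch: concurrent targets need their own RPC sockets.
spdk_tgt -m 0x1 -r /var/tmp/spdk_a.sock &
spdk_tgt -m 0x2 -r /var/tmp/spdk_b.sock &
sleep 5
rpc.py -s /var/tmp/spdk_a.sock spdk_get_version   # -s picks the instance
rpc.py -s /var/tmp/spdk_b.sock spdk_get_version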
00:04:14.445 [2024-10-16 06:47:13.739458] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:14.445 [2024-10-16 06:47:13.739465] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:14.445 06:47:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:14.445 06:47:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:14.445 06:47:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:14.445 06:47:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:14.445 06:47:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:14.445 06:47:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:14.445 06:47:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:14.445 06:47:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2889507 00:04:14.445 06:47:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 2889507 ']' 00:04:14.445 06:47:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 2889507 00:04:14.445 06:47:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:04:14.445 06:47:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:14.445 06:47:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2889507 00:04:14.445 06:47:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:14.445 06:47:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:14.445 06:47:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2889507' 00:04:14.446 killing process with pid 2889507 00:04:14.446 06:47:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 2889507 00:04:14.446 06:47:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 2889507 00:04:14.706 00:04:14.706 real 0m0.765s 00:04:14.706 user 0m0.883s 00:04:14.706 sys 0m0.310s 00:04:14.706 06:47:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:14.706 06:47:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:14.706 ************************************ 00:04:14.706 END TEST exit_on_failed_rpc_init 00:04:14.706 ************************************ 00:04:14.706 06:47:14 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:14.706 00:04:14.707 real 0m13.201s 00:04:14.707 user 0m12.647s 00:04:14.707 sys 0m1.535s 00:04:14.707 06:47:14 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:14.707 06:47:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.707 ************************************ 00:04:14.707 END TEST skip_rpc 00:04:14.707 ************************************ 00:04:14.707 06:47:14 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:14.707 06:47:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:14.707 06:47:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:14.707 06:47:14 -- 
common/autotest_common.sh@10 -- # set +x 00:04:14.707 ************************************ 00:04:14.707 START TEST rpc_client 00:04:14.707 ************************************ 00:04:14.707 06:47:14 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:14.968 * Looking for test storage... 00:04:14.968 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:14.968 06:47:14 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:14.968 06:47:14 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:04:14.968 06:47:14 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:14.968 06:47:14 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:14.968 06:47:14 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:14.968 06:47:14 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:14.968 06:47:14 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:14.968 06:47:14 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:14.968 06:47:14 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:14.968 06:47:14 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:14.968 06:47:14 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:14.968 06:47:14 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:14.968 06:47:14 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:14.968 06:47:14 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:14.968 06:47:14 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:14.968 06:47:14 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:14.968 06:47:14 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:14.968 06:47:14 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:14.968 06:47:14 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:14.968 06:47:14 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:14.968 06:47:14 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:14.968 06:47:14 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:14.968 06:47:14 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:14.968 06:47:14 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:14.968 06:47:14 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:14.968 06:47:14 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:14.968 06:47:14 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:14.968 06:47:14 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:14.968 06:47:14 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:14.968 06:47:14 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:14.968 06:47:14 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:14.968 06:47:14 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:14.968 06:47:14 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:14.968 06:47:14 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:14.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.968 --rc genhtml_branch_coverage=1 00:04:14.968 --rc genhtml_function_coverage=1 00:04:14.968 --rc genhtml_legend=1 00:04:14.968 --rc geninfo_all_blocks=1 00:04:14.968 --rc geninfo_unexecuted_blocks=1 00:04:14.968 00:04:14.968 ' 00:04:14.968 06:47:14 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:14.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.968 --rc genhtml_branch_coverage=1 00:04:14.968 --rc genhtml_function_coverage=1 00:04:14.968 --rc genhtml_legend=1 00:04:14.968 --rc geninfo_all_blocks=1 00:04:14.968 --rc geninfo_unexecuted_blocks=1 00:04:14.968 00:04:14.968 ' 00:04:14.968 06:47:14 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:14.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.968 --rc genhtml_branch_coverage=1 00:04:14.968 --rc genhtml_function_coverage=1 00:04:14.968 --rc genhtml_legend=1 00:04:14.968 --rc geninfo_all_blocks=1 00:04:14.968 --rc geninfo_unexecuted_blocks=1 00:04:14.968 00:04:14.968 ' 00:04:14.968 06:47:14 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:14.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.968 --rc genhtml_branch_coverage=1 00:04:14.968 --rc genhtml_function_coverage=1 00:04:14.968 --rc genhtml_legend=1 00:04:14.968 --rc geninfo_all_blocks=1 00:04:14.968 --rc geninfo_unexecuted_blocks=1 00:04:14.968 00:04:14.968 ' 00:04:14.968 06:47:14 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:14.968 OK 00:04:14.968 06:47:14 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:14.968 00:04:14.968 real 0m0.228s 00:04:14.968 user 0m0.136s 00:04:14.968 sys 0m0.107s 00:04:14.968 06:47:14 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:14.968 06:47:14 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:14.968 ************************************ 00:04:14.968 END TEST rpc_client 00:04:14.968 ************************************ 00:04:14.968 06:47:14 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
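[annotation] rpc_client above is a C-level check: autotest runs the prebuilt test/rpc_client/rpc_client_test binary against the JSON-RPC client library, and the single "OK" in ~0.2 s is the whole verdict; the lcov output before it is only coverage-option probing from scripts/common.sh. The same round trip through the everyday Python client looks like this (default /var/tmp/spdk.sock assumed):

# Sketch: a trivial JSON-RPC round trip with the Python client.
scripts/rpc.py spdk_get_version
scripts/rpc.py rpc_get_methods | head   # list what the target exposes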
00:04:14.968 06:47:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:14.968 06:47:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:14.968 06:47:14 -- common/autotest_common.sh@10 -- # set +x 00:04:14.968 ************************************ 00:04:14.968 START TEST json_config 00:04:14.968 ************************************ 00:04:14.968 06:47:14 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:15.229 06:47:14 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:15.230 06:47:14 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:04:15.230 06:47:14 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:15.230 06:47:14 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:15.230 06:47:14 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:15.230 06:47:14 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:15.230 06:47:14 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:15.230 06:47:14 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:15.230 06:47:14 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:15.230 06:47:14 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:15.230 06:47:14 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:15.230 06:47:14 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:15.230 06:47:14 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:15.230 06:47:14 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:15.230 06:47:14 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:15.230 06:47:14 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:15.230 06:47:14 json_config -- scripts/common.sh@345 -- # : 1 00:04:15.230 06:47:14 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:15.230 06:47:14 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:15.230 06:47:14 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:15.230 06:47:14 json_config -- scripts/common.sh@353 -- # local d=1 00:04:15.230 06:47:14 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:15.230 06:47:14 json_config -- scripts/common.sh@355 -- # echo 1 00:04:15.230 06:47:14 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:15.230 06:47:14 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:15.230 06:47:14 json_config -- scripts/common.sh@353 -- # local d=2 00:04:15.230 06:47:14 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:15.230 06:47:14 json_config -- scripts/common.sh@355 -- # echo 2 00:04:15.230 06:47:14 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:15.230 06:47:14 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:15.230 06:47:14 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:15.230 06:47:14 json_config -- scripts/common.sh@368 -- # return 0 00:04:15.230 06:47:14 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:15.230 06:47:14 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:15.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.230 --rc genhtml_branch_coverage=1 00:04:15.230 --rc genhtml_function_coverage=1 00:04:15.230 --rc genhtml_legend=1 00:04:15.230 --rc geninfo_all_blocks=1 00:04:15.230 --rc geninfo_unexecuted_blocks=1 00:04:15.230 00:04:15.230 ' 00:04:15.230 06:47:14 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:15.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.230 --rc genhtml_branch_coverage=1 00:04:15.230 --rc genhtml_function_coverage=1 00:04:15.230 --rc genhtml_legend=1 00:04:15.230 --rc geninfo_all_blocks=1 00:04:15.230 --rc geninfo_unexecuted_blocks=1 00:04:15.230 00:04:15.230 ' 00:04:15.230 06:47:14 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:15.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.230 --rc genhtml_branch_coverage=1 00:04:15.230 --rc genhtml_function_coverage=1 00:04:15.230 --rc genhtml_legend=1 00:04:15.230 --rc geninfo_all_blocks=1 00:04:15.230 --rc geninfo_unexecuted_blocks=1 00:04:15.230 00:04:15.230 ' 00:04:15.230 06:47:14 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:15.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.230 --rc genhtml_branch_coverage=1 00:04:15.230 --rc genhtml_function_coverage=1 00:04:15.230 --rc genhtml_legend=1 00:04:15.230 --rc geninfo_all_blocks=1 00:04:15.230 --rc geninfo_unexecuted_blocks=1 00:04:15.230 00:04:15.230 ' 00:04:15.230 06:47:14 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:15.230 06:47:14 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:15.230 06:47:14 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:15.230 06:47:14 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:15.230 06:47:14 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:15.230 06:47:14 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:15.230 06:47:14 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:15.230 06:47:14 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:15.230 06:47:14 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:04:15.230 06:47:14 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:15.230 06:47:14 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:15.230 06:47:14 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:15.230 06:47:14 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:15.230 06:47:14 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:15.230 06:47:14 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:15.230 06:47:14 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:15.230 06:47:14 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:15.230 06:47:14 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:15.230 06:47:14 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:15.230 06:47:14 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:15.230 06:47:14 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:15.230 06:47:14 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:15.230 06:47:14 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:15.230 06:47:14 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:15.230 06:47:14 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:15.230 06:47:14 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:15.230 06:47:14 json_config -- paths/export.sh@5 -- # export PATH 00:04:15.230 06:47:14 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:15.230 06:47:14 json_config -- nvmf/common.sh@51 -- # : 0 00:04:15.230 06:47:14 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:15.230 06:47:14 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
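[annotation] One detail from the nvmf/common.sh dump above worth keeping in mind: the host identity that later connect steps use is generated fresh by nvme-cli, and NVME_HOSTID is the UUID portion of that NQN. One way to reproduce the pair (the parameter expansion here is illustrative, not necessarily common.sh's exact code):

# Sketch: deriving the host NQN/ID pair as seen in the log.
NVME_HOSTNQN=$(nvme gen-hostnqn)     # nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}      # keep only the trailing uuid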
00:04:15.230 06:47:14 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:15.230 06:47:14 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:15.230 06:47:14 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:15.230 06:47:14 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:15.230 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:15.230 06:47:14 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:15.230 06:47:14 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:15.230 06:47:14 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:15.230 06:47:14 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:15.230 06:47:14 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:15.230 06:47:14 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:15.230 06:47:14 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:15.230 06:47:14 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:15.230 06:47:14 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:15.230 06:47:14 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:15.230 06:47:14 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:15.230 06:47:14 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:15.230 06:47:14 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:15.230 06:47:14 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:15.230 06:47:14 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:15.230 06:47:14 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:15.230 06:47:14 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:15.230 06:47:14 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:15.230 06:47:14 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:15.230 INFO: JSON configuration test init 00:04:15.230 06:47:14 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:15.230 06:47:14 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:15.230 06:47:14 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:15.230 06:47:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:15.230 06:47:14 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:15.230 06:47:14 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:15.230 06:47:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:15.231 06:47:14 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:15.231 06:47:14 json_config -- 
json_config/common.sh@9 -- # local app=target 00:04:15.231 06:47:14 json_config -- json_config/common.sh@10 -- # shift 00:04:15.231 06:47:14 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:15.231 06:47:14 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:15.231 06:47:14 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:15.231 06:47:14 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:15.231 06:47:14 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:15.231 06:47:14 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2889972 00:04:15.231 06:47:14 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:15.231 Waiting for target to run... 00:04:15.231 06:47:14 json_config -- json_config/common.sh@25 -- # waitforlisten 2889972 /var/tmp/spdk_tgt.sock 00:04:15.231 06:47:14 json_config -- common/autotest_common.sh@831 -- # '[' -z 2889972 ']' 00:04:15.231 06:47:14 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:15.231 06:47:14 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:15.231 06:47:14 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:15.231 06:47:14 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:15.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:15.231 06:47:14 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:15.231 06:47:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:15.231 [2024-10-16 06:47:14.720032] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
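[annotation] The target just launched with --wait-for-rpc, so once the EAL is up it will idle with only the RPC server live until initialization is triggered; json_config drives that through load_config, but released by hand it is a single RPC pair (socket path as in the log):

# Sketch: manually releasing a --wait-for-rpc target.
scripts/rpc.py -s /var/tmp/spdk_tgt.sock framework_start_init
scripts/rpc.py -s /var/tmp/spdk_tgt.sock framework_wait_init   # blocks until init is done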
00:04:15.231 [2024-10-16 06:47:14.720111] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2889972 ] 00:04:15.802 [2024-10-16 06:47:15.003157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.802 [2024-10-16 06:47:15.031318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.062 06:47:15 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:16.062 06:47:15 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:16.062 06:47:15 json_config -- json_config/common.sh@26 -- # echo '' 00:04:16.062 00:04:16.062 06:47:15 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:16.062 06:47:15 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:16.062 06:47:15 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:16.062 06:47:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:16.062 06:47:15 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:16.062 06:47:15 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:16.062 06:47:15 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:16.062 06:47:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:16.062 06:47:15 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:16.062 06:47:15 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:16.062 06:47:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:16.634 06:47:16 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:16.634 06:47:16 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:16.634 06:47:16 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:16.634 06:47:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:16.634 06:47:16 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:16.634 06:47:16 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:16.634 06:47:16 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:16.634 06:47:16 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:16.634 06:47:16 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:16.634 06:47:16 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:16.634 06:47:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:16.634 06:47:16 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:16.896 06:47:16 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:16.896 06:47:16 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:16.896 06:47:16 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:16.896 06:47:16 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:16.896 06:47:16 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:16.896 06:47:16 json_config -- json_config/json_config.sh@54 -- # sort 00:04:16.896 06:47:16 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:16.896 06:47:16 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:16.896 06:47:16 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:16.896 06:47:16 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:16.896 06:47:16 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:16.896 06:47:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:16.896 06:47:16 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:16.896 06:47:16 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:16.896 06:47:16 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:16.896 06:47:16 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:16.896 06:47:16 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:16.896 06:47:16 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:16.896 06:47:16 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:16.896 06:47:16 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:16.896 06:47:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:16.896 06:47:16 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:16.896 06:47:16 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:16.896 06:47:16 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:16.896 06:47:16 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:16.896 06:47:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:17.170 MallocForNvmf0 00:04:17.170 06:47:16 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:17.170 06:47:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:17.472 MallocForNvmf1 00:04:17.472 06:47:16 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:17.472 06:47:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:17.472 [2024-10-16 06:47:16.867379] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:17.472 06:47:16 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:17.472 06:47:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:17.772 06:47:17 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:17.772 06:47:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:17.772 06:47:17 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:17.772 06:47:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:18.033 06:47:17 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:18.033 06:47:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:18.293 [2024-10-16 06:47:17.573555] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:18.293 06:47:17 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:18.293 06:47:17 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:18.293 06:47:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.293 06:47:17 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:18.293 06:47:17 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:18.293 06:47:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.293 06:47:17 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:18.293 06:47:17 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:18.293 06:47:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:18.553 MallocBdevForConfigChangeCheck 00:04:18.553 06:47:17 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:18.553 06:47:17 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:18.553 06:47:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.553 06:47:17 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:18.553 06:47:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:18.825 06:47:18 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:18.825 INFO: shutting down applications... 
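The subsystem build traced above is pure JSON-RPC against the target's UNIX socket. Condensed into a standalone sequence — the rpc() wrapper is shorthand introduced here; every call and argument is verbatim from the trace:

rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"; }
# Two malloc bdevs to use as namespaces: 8 MiB with 512 B blocks, 4 MiB with 1024 B blocks.
rpc bdev_malloc_create 8 512 --name MallocForNvmf0
rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
# TCP transport, then one subsystem carrying both namespaces and a listener on port 4420.
rpc nvmf_create_transport -t tcp -u 8192 -c 0
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420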
00:04:18.825 06:47:18 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:18.825 06:47:18 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:18.825 06:47:18 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:18.825 06:47:18 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:19.401 Calling clear_iscsi_subsystem 00:04:19.401 Calling clear_nvmf_subsystem 00:04:19.401 Calling clear_nbd_subsystem 00:04:19.401 Calling clear_ublk_subsystem 00:04:19.401 Calling clear_vhost_blk_subsystem 00:04:19.401 Calling clear_vhost_scsi_subsystem 00:04:19.401 Calling clear_bdev_subsystem 00:04:19.401 06:47:18 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:19.401 06:47:18 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:19.401 06:47:18 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:19.401 06:47:18 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:19.401 06:47:18 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:19.401 06:47:18 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:19.662 06:47:18 json_config -- json_config/json_config.sh@352 -- # break 00:04:19.662 06:47:18 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:19.662 06:47:18 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:19.662 06:47:18 json_config -- json_config/common.sh@31 -- # local app=target 00:04:19.662 06:47:18 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:19.662 06:47:18 json_config -- json_config/common.sh@35 -- # [[ -n 2889972 ]] 00:04:19.662 06:47:18 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2889972 00:04:19.662 06:47:18 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:19.662 06:47:18 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:19.662 06:47:18 json_config -- json_config/common.sh@41 -- # kill -0 2889972 00:04:19.662 06:47:18 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:20.233 06:47:19 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:20.233 06:47:19 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:20.233 06:47:19 json_config -- json_config/common.sh@41 -- # kill -0 2889972 00:04:20.233 06:47:19 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:20.233 06:47:19 json_config -- json_config/common.sh@43 -- # break 00:04:20.233 06:47:19 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:20.233 06:47:19 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:20.233 SPDK target shutdown done 00:04:20.233 06:47:19 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:20.233 INFO: relaunching applications... 
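The shutdown just logged does not block on the pid up front: json_config/common.sh sends SIGINT and then polls. A minimal sketch of that loop, assuming the same 30 x 0.5 s budget visible in the trace:

# Sketch of json_config_test_shutdown_app as traced above.
shutdown_app() {
    local pid=$1 i
    kill -SIGINT "$pid"                      # ask the reactor to exit cleanly
    for (( i = 0; i < 30; i++ )); do
        if ! kill -0 "$pid" 2> /dev/null; then
            echo 'SPDK target shutdown done'
            return 0
        fi
        sleep 0.5                            # ~15 s total before giving up
    done
    return 1                                 # caller may escalate from here
}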
00:04:20.233 06:47:19 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:20.233 06:47:19 json_config -- json_config/common.sh@9 -- # local app=target 00:04:20.233 06:47:19 json_config -- json_config/common.sh@10 -- # shift 00:04:20.233 06:47:19 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:20.233 06:47:19 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:20.233 06:47:19 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:20.233 06:47:19 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:20.233 06:47:19 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:20.233 06:47:19 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2891109 00:04:20.233 06:47:19 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:20.233 Waiting for target to run... 00:04:20.233 06:47:19 json_config -- json_config/common.sh@25 -- # waitforlisten 2891109 /var/tmp/spdk_tgt.sock 00:04:20.233 06:47:19 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:20.233 06:47:19 json_config -- common/autotest_common.sh@831 -- # '[' -z 2891109 ']' 00:04:20.233 06:47:19 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:20.233 06:47:19 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:20.233 06:47:19 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:20.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:20.233 06:47:19 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:20.233 06:47:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:20.233 [2024-10-16 06:47:19.546038] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:04:20.233 [2024-10-16 06:47:19.546095] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2891109 ] 00:04:20.493 [2024-10-16 06:47:19.869902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.493 [2024-10-16 06:47:19.901640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.065 [2024-10-16 06:47:20.401205] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:21.065 [2024-10-16 06:47:20.433571] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:21.065 06:47:20 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:21.065 06:47:20 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:21.065 06:47:20 json_config -- json_config/common.sh@26 -- # echo '' 00:04:21.065 00:04:21.065 06:47:20 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:21.065 06:47:20 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:21.065 INFO: Checking if target configuration is the same... 
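The comparison announced here is json_diff.sh, whose trace follows: both configurations are normalized with config_filter.py -method sort and then diffed byte-for-byte. A simplified outline — file1/file2 stand in for the /dev/fd/62 process substitution and the on-disk spdk_tgt_config.json, and the real script also dumps both files on mismatch:

# Simplified json_diff.sh: normalize, diff, report.
file1=$1 file2=$2
config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
tmp_file_1=$(mktemp /tmp/62.XXX)
tmp_file_2=$(mktemp /tmp/spdk_tgt_config.json.XXX)
"$config_filter" -method sort < "$file1" > "$tmp_file_1"
"$config_filter" -method sort < "$file2" > "$tmp_file_2"
if diff -u "$tmp_file_1" "$tmp_file_2"; then
    echo 'INFO: JSON config files are the same'   # the exit-0 path taken in this first check
    rm "$tmp_file_1" "$tmp_file_2"
    exit 0
fi
exit 1   # taken in the second check below, after MallocBdevForConfigChangeCheck is deleted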
00:04:21.065 06:47:20 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:21.065 06:47:20 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:21.065 06:47:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:21.065 + '[' 2 -ne 2 ']' 00:04:21.065 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:21.065 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:21.065 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:21.065 +++ basename /dev/fd/62 00:04:21.065 ++ mktemp /tmp/62.XXX 00:04:21.065 + tmp_file_1=/tmp/62.ky4 00:04:21.065 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:21.065 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:21.065 + tmp_file_2=/tmp/spdk_tgt_config.json.Sfv 00:04:21.065 + ret=0 00:04:21.065 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:21.326 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:21.588 + diff -u /tmp/62.ky4 /tmp/spdk_tgt_config.json.Sfv 00:04:21.588 + echo 'INFO: JSON config files are the same' 00:04:21.588 INFO: JSON config files are the same 00:04:21.588 + rm /tmp/62.ky4 /tmp/spdk_tgt_config.json.Sfv 00:04:21.588 + exit 0 00:04:21.588 06:47:20 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:21.588 06:47:20 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:21.588 INFO: changing configuration and checking if this can be detected... 00:04:21.588 06:47:20 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:21.588 06:47:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:21.588 06:47:21 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:21.588 06:47:21 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:21.588 06:47:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:21.588 + '[' 2 -ne 2 ']' 00:04:21.588 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:21.588 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:21.588 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:21.588 +++ basename /dev/fd/62 00:04:21.588 ++ mktemp /tmp/62.XXX 00:04:21.588 + tmp_file_1=/tmp/62.Re5 00:04:21.588 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:21.588 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:21.588 + tmp_file_2=/tmp/spdk_tgt_config.json.clu 00:04:21.588 + ret=0 00:04:21.588 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:22.160 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:22.160 + diff -u /tmp/62.Re5 /tmp/spdk_tgt_config.json.clu 00:04:22.160 + ret=1 00:04:22.160 + echo '=== Start of file: /tmp/62.Re5 ===' 00:04:22.160 + cat /tmp/62.Re5 00:04:22.160 + echo '=== End of file: /tmp/62.Re5 ===' 00:04:22.160 + echo '' 00:04:22.160 + echo '=== Start of file: /tmp/spdk_tgt_config.json.clu ===' 00:04:22.160 + cat /tmp/spdk_tgt_config.json.clu 00:04:22.160 + echo '=== End of file: /tmp/spdk_tgt_config.json.clu ===' 00:04:22.160 + echo '' 00:04:22.160 + rm /tmp/62.Re5 /tmp/spdk_tgt_config.json.clu 00:04:22.160 + exit 1 00:04:22.160 06:47:21 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:22.160 INFO: configuration change detected. 00:04:22.160 06:47:21 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:22.160 06:47:21 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:22.160 06:47:21 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:22.160 06:47:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.160 06:47:21 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:22.160 06:47:21 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:22.160 06:47:21 json_config -- json_config/json_config.sh@324 -- # [[ -n 2891109 ]] 00:04:22.160 06:47:21 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:22.160 06:47:21 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:22.160 06:47:21 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:22.160 06:47:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.160 06:47:21 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:22.160 06:47:21 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:22.160 06:47:21 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:22.160 06:47:21 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:22.160 06:47:21 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:22.160 06:47:21 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:22.160 06:47:21 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:22.160 06:47:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.160 06:47:21 json_config -- json_config/json_config.sh@330 -- # killprocess 2891109 00:04:22.160 06:47:21 json_config -- common/autotest_common.sh@950 -- # '[' -z 2891109 ']' 00:04:22.160 06:47:21 json_config -- common/autotest_common.sh@954 -- # kill -0 2891109 00:04:22.160 06:47:21 json_config -- common/autotest_common.sh@955 -- # uname 00:04:22.160 06:47:21 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:22.160 06:47:21 
json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2891109 00:04:22.160 06:47:21 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:22.160 06:47:21 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:22.160 06:47:21 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2891109' 00:04:22.160 killing process with pid 2891109 00:04:22.160 06:47:21 json_config -- common/autotest_common.sh@969 -- # kill 2891109 00:04:22.160 06:47:21 json_config -- common/autotest_common.sh@974 -- # wait 2891109 00:04:22.421 06:47:21 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:22.421 06:47:21 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:22.421 06:47:21 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:22.421 06:47:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.421 06:47:21 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:22.421 06:47:21 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:22.421 INFO: Success 00:04:22.421 00:04:22.421 real 0m7.430s 00:04:22.421 user 0m8.961s 00:04:22.421 sys 0m2.019s 00:04:22.421 06:47:21 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:22.421 06:47:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.421 ************************************ 00:04:22.421 END TEST json_config 00:04:22.421 ************************************ 00:04:22.421 06:47:21 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:22.421 06:47:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:22.421 06:47:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:22.421 06:47:21 -- common/autotest_common.sh@10 -- # set +x 00:04:22.683 ************************************ 00:04:22.683 START TEST json_config_extra_key 00:04:22.683 ************************************ 00:04:22.683 06:47:21 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:22.683 06:47:22 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:22.683 06:47:22 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:04:22.683 06:47:22 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:22.683 06:47:22 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:22.683 06:47:22 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:22.683 06:47:22 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:22.683 06:47:22 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:22.683 06:47:22 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:22.683 06:47:22 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:22.683 06:47:22 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:22.683 06:47:22 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:22.683 06:47:22 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:22.683 06:47:22 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:04:22.683 06:47:22 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:22.683 06:47:22 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:22.683 06:47:22 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:22.683 06:47:22 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:22.683 06:47:22 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:22.683 06:47:22 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:22.683 06:47:22 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:22.683 06:47:22 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:22.683 06:47:22 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:22.683 06:47:22 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:22.683 06:47:22 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:22.683 06:47:22 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:22.683 06:47:22 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:22.683 06:47:22 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:22.683 06:47:22 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:22.683 06:47:22 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:22.683 06:47:22 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:22.683 06:47:22 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:22.683 06:47:22 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:22.683 06:47:22 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:22.683 06:47:22 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:22.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.683 --rc genhtml_branch_coverage=1 00:04:22.683 --rc genhtml_function_coverage=1 00:04:22.683 --rc genhtml_legend=1 00:04:22.683 --rc geninfo_all_blocks=1 00:04:22.683 --rc geninfo_unexecuted_blocks=1 00:04:22.683 00:04:22.683 ' 00:04:22.683 06:47:22 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:22.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.683 --rc genhtml_branch_coverage=1 00:04:22.683 --rc genhtml_function_coverage=1 00:04:22.683 --rc genhtml_legend=1 00:04:22.683 --rc geninfo_all_blocks=1 00:04:22.683 --rc geninfo_unexecuted_blocks=1 00:04:22.683 00:04:22.683 ' 00:04:22.683 06:47:22 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:22.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.683 --rc genhtml_branch_coverage=1 00:04:22.683 --rc genhtml_function_coverage=1 00:04:22.683 --rc genhtml_legend=1 00:04:22.683 --rc geninfo_all_blocks=1 00:04:22.683 --rc geninfo_unexecuted_blocks=1 00:04:22.683 00:04:22.683 ' 00:04:22.683 06:47:22 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:22.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.683 --rc genhtml_branch_coverage=1 00:04:22.683 --rc genhtml_function_coverage=1 00:04:22.683 --rc genhtml_legend=1 00:04:22.683 --rc geninfo_all_blocks=1 00:04:22.683 --rc geninfo_unexecuted_blocks=1 00:04:22.683 00:04:22.683 ' 00:04:22.683 06:47:22 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:22.683 06:47:22 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:22.683 06:47:22 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:22.683 06:47:22 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:22.683 06:47:22 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:22.683 06:47:22 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:22.683 06:47:22 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:22.683 06:47:22 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:22.683 06:47:22 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:22.683 06:47:22 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:22.683 06:47:22 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:22.683 06:47:22 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:22.683 06:47:22 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:22.683 06:47:22 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:22.683 06:47:22 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:22.683 06:47:22 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:22.683 06:47:22 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:22.683 06:47:22 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:22.683 06:47:22 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:22.683 06:47:22 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:22.683 06:47:22 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:22.683 06:47:22 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:22.683 06:47:22 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:22.683 06:47:22 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.683 06:47:22 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.683 06:47:22 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.683 06:47:22 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:22.683 06:47:22 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.683 06:47:22 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:22.683 06:47:22 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:22.683 06:47:22 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:22.683 06:47:22 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:22.683 06:47:22 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:22.683 06:47:22 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:22.683 06:47:22 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:22.683 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:22.683 06:47:22 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:22.683 06:47:22 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:22.683 06:47:22 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:22.683 06:47:22 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:22.683 06:47:22 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:22.683 06:47:22 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:22.683 06:47:22 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:22.683 06:47:22 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:22.683 06:47:22 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:22.683 06:47:22 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:22.683 06:47:22 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:22.683 06:47:22 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:22.683 06:47:22 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:22.683 06:47:22 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:22.683 INFO: launching applications... 
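The launch announced here starts a second spdk_tgt against extra_key.json and blocks in waitforlisten until the RPC socket answers. The real implementation lives in autotest_common.sh; what follows is a hypothetical reduction — polling rpc_get_methods is an assumption made for this sketch, not necessarily the call the helper issues:

# Hypothetical waitforlisten: poll the RPC socket while the app stays alive.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk_tgt.sock} i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for (( i = 0; i < 100; i++ )); do
        kill -0 "$pid" 2> /dev/null || return 1   # app died during startup
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
        sleep 0.1
    done
    return 1
}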
00:04:22.683 06:47:22 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:22.683 06:47:22 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:22.683 06:47:22 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:22.683 06:47:22 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:22.684 06:47:22 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:22.684 06:47:22 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:22.684 06:47:22 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:22.684 06:47:22 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:22.684 06:47:22 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2891588 00:04:22.684 06:47:22 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:22.684 Waiting for target to run... 00:04:22.684 06:47:22 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2891588 /var/tmp/spdk_tgt.sock 00:04:22.684 06:47:22 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 2891588 ']' 00:04:22.684 06:47:22 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:22.684 06:47:22 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:22.684 06:47:22 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:22.684 06:47:22 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:22.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:22.684 06:47:22 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:22.684 06:47:22 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:22.946 [2024-10-16 06:47:22.221074] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:04:22.946 [2024-10-16 06:47:22.221156] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2891588 ] 00:04:23.207 [2024-10-16 06:47:22.564825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.207 [2024-10-16 06:47:22.596036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.778 06:47:23 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:23.778 06:47:23 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:04:23.778 06:47:23 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:23.778 00:04:23.778 06:47:23 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:23.778 INFO: shutting down applications... 
00:04:23.779 06:47:23 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:23.779 06:47:23 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:23.779 06:47:23 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:23.779 06:47:23 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2891588 ]] 00:04:23.779 06:47:23 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2891588 00:04:23.779 06:47:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:23.779 06:47:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:23.779 06:47:23 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2891588 00:04:23.779 06:47:23 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:24.040 06:47:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:24.040 06:47:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:24.040 06:47:23 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2891588 00:04:24.040 06:47:23 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:24.040 06:47:23 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:24.040 06:47:23 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:24.040 06:47:23 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:24.040 SPDK target shutdown done 00:04:24.040 06:47:23 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:24.040 Success 00:04:24.040 00:04:24.040 real 0m1.571s 00:04:24.040 user 0m1.105s 00:04:24.040 sys 0m0.470s 00:04:24.040 06:47:23 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:24.040 06:47:23 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:24.040 ************************************ 00:04:24.040 END TEST json_config_extra_key 00:04:24.040 ************************************ 00:04:24.301 06:47:23 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:24.301 06:47:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:24.301 06:47:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:24.301 06:47:23 -- common/autotest_common.sh@10 -- # set +x 00:04:24.301 ************************************ 00:04:24.301 START TEST alias_rpc 00:04:24.301 ************************************ 00:04:24.301 06:47:23 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:24.301 * Looking for test storage... 
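Every suite, including the alias_rpc run opening here, gates its lcov flags on the scripts/common.sh version compare traced above and repeated below: split both versions on '.', '-' or ':' and compare numerically field by field. A condensed, behavior-equivalent sketch, with missing-field padding simplified to a :-0 default:

# Condensed cmp_versions/lt from scripts/common.sh.
cmp_versions() {
    local -a ver1 ver2
    local op=$2 v ver1_l ver2_l
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '=' ]]
}
lt() { cmp_versions "$1" '<' "$2"; }
lt 1.15 2 && echo 'lcov 1.15 predates 2'   # the branch this log keeps taking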
00:04:24.301 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:24.301 06:47:23 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:24.301 06:47:23 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:24.301 06:47:23 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:24.301 06:47:23 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:24.301 06:47:23 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:24.301 06:47:23 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:24.301 06:47:23 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:24.301 06:47:23 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:24.301 06:47:23 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:24.301 06:47:23 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:24.301 06:47:23 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:24.302 06:47:23 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:24.302 06:47:23 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:24.302 06:47:23 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:24.302 06:47:23 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:24.302 06:47:23 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:24.302 06:47:23 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:24.302 06:47:23 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:24.302 06:47:23 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:24.302 06:47:23 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:24.302 06:47:23 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:24.302 06:47:23 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:24.302 06:47:23 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:24.302 06:47:23 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:24.302 06:47:23 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:24.302 06:47:23 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:24.302 06:47:23 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:24.302 06:47:23 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:24.302 06:47:23 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:24.302 06:47:23 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:24.302 06:47:23 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:24.302 06:47:23 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:24.302 06:47:23 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:24.302 06:47:23 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:24.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.302 --rc genhtml_branch_coverage=1 00:04:24.302 --rc genhtml_function_coverage=1 00:04:24.302 --rc genhtml_legend=1 00:04:24.302 --rc geninfo_all_blocks=1 00:04:24.302 --rc geninfo_unexecuted_blocks=1 00:04:24.302 00:04:24.302 ' 00:04:24.302 06:47:23 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:24.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.302 --rc genhtml_branch_coverage=1 00:04:24.302 --rc genhtml_function_coverage=1 00:04:24.302 --rc genhtml_legend=1 00:04:24.302 --rc geninfo_all_blocks=1 00:04:24.302 --rc geninfo_unexecuted_blocks=1 00:04:24.302 00:04:24.302 ' 00:04:24.302 06:47:23 
alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:24.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.302 --rc genhtml_branch_coverage=1 00:04:24.302 --rc genhtml_function_coverage=1 00:04:24.302 --rc genhtml_legend=1 00:04:24.302 --rc geninfo_all_blocks=1 00:04:24.302 --rc geninfo_unexecuted_blocks=1 00:04:24.302 00:04:24.302 ' 00:04:24.302 06:47:23 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:24.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.302 --rc genhtml_branch_coverage=1 00:04:24.302 --rc genhtml_function_coverage=1 00:04:24.302 --rc genhtml_legend=1 00:04:24.302 --rc geninfo_all_blocks=1 00:04:24.302 --rc geninfo_unexecuted_blocks=1 00:04:24.302 00:04:24.302 ' 00:04:24.302 06:47:23 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:24.302 06:47:23 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2891979 00:04:24.302 06:47:23 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2891979 00:04:24.302 06:47:23 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:24.302 06:47:23 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 2891979 ']' 00:04:24.302 06:47:23 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:24.302 06:47:23 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:24.302 06:47:23 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:24.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:24.302 06:47:23 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:24.302 06:47:23 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.562 [2024-10-16 06:47:23.848027] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
00:04:24.562 [2024-10-16 06:47:23.848105] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2891979 ] 00:04:24.562 [2024-10-16 06:47:23.928531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.562 [2024-10-16 06:47:23.964467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.502 06:47:24 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:25.502 06:47:24 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:25.502 06:47:24 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:25.502 06:47:24 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2891979 00:04:25.502 06:47:24 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 2891979 ']' 00:04:25.502 06:47:24 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 2891979 00:04:25.502 06:47:24 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:04:25.502 06:47:24 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:25.502 06:47:24 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2891979 00:04:25.502 06:47:24 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:25.502 06:47:24 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:25.502 06:47:24 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2891979' 00:04:25.502 killing process with pid 2891979 00:04:25.502 06:47:24 alias_rpc -- common/autotest_common.sh@969 -- # kill 2891979 00:04:25.502 06:47:24 alias_rpc -- common/autotest_common.sh@974 -- # wait 2891979 00:04:25.763 00:04:25.763 real 0m1.492s 00:04:25.763 user 0m1.632s 00:04:25.763 sys 0m0.424s 00:04:25.763 06:47:25 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:25.763 06:47:25 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.763 ************************************ 00:04:25.763 END TEST alias_rpc 00:04:25.763 ************************************ 00:04:25.763 06:47:25 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:25.763 06:47:25 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:25.763 06:47:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:25.763 06:47:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:25.763 06:47:25 -- common/autotest_common.sh@10 -- # set +x 00:04:25.763 ************************************ 00:04:25.763 START TEST spdkcli_tcp 00:04:25.763 ************************************ 00:04:25.763 06:47:25 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:25.763 * Looking for test storage... 
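killprocess, traced just above for pid 2891979, is the teardown half of each suite: confirm the pid is still alive, check its process name (so a sudo wrapper is never treated as the target), then kill and reap. A condensed sketch — the real helper's sudo handling is more involved than the plain refusal shown here:

# Condensed killprocess from common/autotest_common.sh.
killprocess() {
    local pid=$1 process_name
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1                          # must still be running
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid") # e.g. reactor_0 in this run
        [ "$process_name" = sudo ] && return 1          # simplification: just refuse
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"   # works because the target is a child of the test shell
}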
00:04:25.763 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:25.763 06:47:25 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:26.024 06:47:25 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:04:26.024 06:47:25 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:26.024 06:47:25 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:26.024 06:47:25 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:26.024 06:47:25 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:26.024 06:47:25 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:26.024 06:47:25 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:26.024 06:47:25 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:26.024 06:47:25 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:26.024 06:47:25 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:26.024 06:47:25 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:26.024 06:47:25 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:26.024 06:47:25 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:26.024 06:47:25 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:26.024 06:47:25 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:26.024 06:47:25 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:26.024 06:47:25 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:26.024 06:47:25 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:26.024 06:47:25 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:26.024 06:47:25 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:26.024 06:47:25 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:26.024 06:47:25 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:26.024 06:47:25 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:26.024 06:47:25 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:26.024 06:47:25 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:26.024 06:47:25 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:26.024 06:47:25 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:26.024 06:47:25 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:26.024 06:47:25 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:26.024 06:47:25 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:26.024 06:47:25 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:26.024 06:47:25 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:26.024 06:47:25 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:26.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.024 --rc genhtml_branch_coverage=1 00:04:26.024 --rc genhtml_function_coverage=1 00:04:26.024 --rc genhtml_legend=1 00:04:26.024 --rc geninfo_all_blocks=1 00:04:26.024 --rc geninfo_unexecuted_blocks=1 00:04:26.024 00:04:26.024 ' 00:04:26.024 06:47:25 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:26.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.024 --rc genhtml_branch_coverage=1 00:04:26.024 --rc genhtml_function_coverage=1 00:04:26.024 --rc genhtml_legend=1 00:04:26.024 --rc geninfo_all_blocks=1 00:04:26.024 --rc 
geninfo_unexecuted_blocks=1 00:04:26.024 00:04:26.024 ' 00:04:26.024 06:47:25 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:26.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.024 --rc genhtml_branch_coverage=1 00:04:26.024 --rc genhtml_function_coverage=1 00:04:26.024 --rc genhtml_legend=1 00:04:26.024 --rc geninfo_all_blocks=1 00:04:26.024 --rc geninfo_unexecuted_blocks=1 00:04:26.024 00:04:26.024 ' 00:04:26.024 06:47:25 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:26.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.024 --rc genhtml_branch_coverage=1 00:04:26.024 --rc genhtml_function_coverage=1 00:04:26.024 --rc genhtml_legend=1 00:04:26.024 --rc geninfo_all_blocks=1 00:04:26.024 --rc geninfo_unexecuted_blocks=1 00:04:26.024 00:04:26.024 ' 00:04:26.024 06:47:25 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:26.024 06:47:25 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:26.024 06:47:25 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:26.024 06:47:25 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:26.024 06:47:25 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:26.024 06:47:25 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:26.024 06:47:25 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:26.024 06:47:25 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:26.024 06:47:25 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:26.024 06:47:25 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2892376 00:04:26.024 06:47:25 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2892376 00:04:26.024 06:47:25 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:26.024 06:47:25 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 2892376 ']' 00:04:26.024 06:47:25 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:26.024 06:47:25 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:26.024 06:47:25 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:26.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:26.024 06:47:25 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:26.024 06:47:25 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:26.024 [2024-10-16 06:47:25.423396] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
00:04:26.024 [2024-10-16 06:47:25.423447] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2892376 ] 00:04:26.024 [2024-10-16 06:47:25.498198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:26.285 [2024-10-16 06:47:25.530734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.285 [2024-10-16 06:47:25.530734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:26.856 06:47:26 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:26.856 06:47:26 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:04:26.856 06:47:26 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2892707 00:04:26.856 06:47:26 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:26.856 06:47:26 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:27.117 [ 00:04:27.117 "bdev_malloc_delete", 00:04:27.117 "bdev_malloc_create", 00:04:27.117 "bdev_null_resize", 00:04:27.117 "bdev_null_delete", 00:04:27.117 "bdev_null_create", 00:04:27.117 "bdev_nvme_cuse_unregister", 00:04:27.117 "bdev_nvme_cuse_register", 00:04:27.117 "bdev_opal_new_user", 00:04:27.117 "bdev_opal_set_lock_state", 00:04:27.117 "bdev_opal_delete", 00:04:27.117 "bdev_opal_get_info", 00:04:27.117 "bdev_opal_create", 00:04:27.117 "bdev_nvme_opal_revert", 00:04:27.117 "bdev_nvme_opal_init", 00:04:27.117 "bdev_nvme_send_cmd", 00:04:27.117 "bdev_nvme_set_keys", 00:04:27.117 "bdev_nvme_get_path_iostat", 00:04:27.117 "bdev_nvme_get_mdns_discovery_info", 00:04:27.117 "bdev_nvme_stop_mdns_discovery", 00:04:27.117 "bdev_nvme_start_mdns_discovery", 00:04:27.117 "bdev_nvme_set_multipath_policy", 00:04:27.117 "bdev_nvme_set_preferred_path", 00:04:27.117 "bdev_nvme_get_io_paths", 00:04:27.117 "bdev_nvme_remove_error_injection", 00:04:27.117 "bdev_nvme_add_error_injection", 00:04:27.117 "bdev_nvme_get_discovery_info", 00:04:27.117 "bdev_nvme_stop_discovery", 00:04:27.117 "bdev_nvme_start_discovery", 00:04:27.117 "bdev_nvme_get_controller_health_info", 00:04:27.117 "bdev_nvme_disable_controller", 00:04:27.117 "bdev_nvme_enable_controller", 00:04:27.117 "bdev_nvme_reset_controller", 00:04:27.117 "bdev_nvme_get_transport_statistics", 00:04:27.117 "bdev_nvme_apply_firmware", 00:04:27.117 "bdev_nvme_detach_controller", 00:04:27.117 "bdev_nvme_get_controllers", 00:04:27.117 "bdev_nvme_attach_controller", 00:04:27.117 "bdev_nvme_set_hotplug", 00:04:27.117 "bdev_nvme_set_options", 00:04:27.117 "bdev_passthru_delete", 00:04:27.117 "bdev_passthru_create", 00:04:27.117 "bdev_lvol_set_parent_bdev", 00:04:27.117 "bdev_lvol_set_parent", 00:04:27.117 "bdev_lvol_check_shallow_copy", 00:04:27.117 "bdev_lvol_start_shallow_copy", 00:04:27.117 "bdev_lvol_grow_lvstore", 00:04:27.117 "bdev_lvol_get_lvols", 00:04:27.117 "bdev_lvol_get_lvstores", 00:04:27.117 "bdev_lvol_delete", 00:04:27.117 "bdev_lvol_set_read_only", 00:04:27.117 "bdev_lvol_resize", 00:04:27.117 "bdev_lvol_decouple_parent", 00:04:27.117 "bdev_lvol_inflate", 00:04:27.117 "bdev_lvol_rename", 00:04:27.117 "bdev_lvol_clone_bdev", 00:04:27.117 "bdev_lvol_clone", 00:04:27.117 "bdev_lvol_snapshot", 00:04:27.117 "bdev_lvol_create", 00:04:27.117 "bdev_lvol_delete_lvstore", 00:04:27.117 "bdev_lvol_rename_lvstore", 
00:04:27.118 "bdev_lvol_create_lvstore", 00:04:27.118 "bdev_raid_set_options", 00:04:27.118 "bdev_raid_remove_base_bdev", 00:04:27.118 "bdev_raid_add_base_bdev", 00:04:27.118 "bdev_raid_delete", 00:04:27.118 "bdev_raid_create", 00:04:27.118 "bdev_raid_get_bdevs", 00:04:27.118 "bdev_error_inject_error", 00:04:27.118 "bdev_error_delete", 00:04:27.118 "bdev_error_create", 00:04:27.118 "bdev_split_delete", 00:04:27.118 "bdev_split_create", 00:04:27.118 "bdev_delay_delete", 00:04:27.118 "bdev_delay_create", 00:04:27.118 "bdev_delay_update_latency", 00:04:27.118 "bdev_zone_block_delete", 00:04:27.118 "bdev_zone_block_create", 00:04:27.118 "blobfs_create", 00:04:27.118 "blobfs_detect", 00:04:27.118 "blobfs_set_cache_size", 00:04:27.118 "bdev_aio_delete", 00:04:27.118 "bdev_aio_rescan", 00:04:27.118 "bdev_aio_create", 00:04:27.118 "bdev_ftl_set_property", 00:04:27.118 "bdev_ftl_get_properties", 00:04:27.118 "bdev_ftl_get_stats", 00:04:27.118 "bdev_ftl_unmap", 00:04:27.118 "bdev_ftl_unload", 00:04:27.118 "bdev_ftl_delete", 00:04:27.118 "bdev_ftl_load", 00:04:27.118 "bdev_ftl_create", 00:04:27.118 "bdev_virtio_attach_controller", 00:04:27.118 "bdev_virtio_scsi_get_devices", 00:04:27.118 "bdev_virtio_detach_controller", 00:04:27.118 "bdev_virtio_blk_set_hotplug", 00:04:27.118 "bdev_iscsi_delete", 00:04:27.118 "bdev_iscsi_create", 00:04:27.118 "bdev_iscsi_set_options", 00:04:27.118 "accel_error_inject_error", 00:04:27.118 "ioat_scan_accel_module", 00:04:27.118 "dsa_scan_accel_module", 00:04:27.118 "iaa_scan_accel_module", 00:04:27.118 "vfu_virtio_create_fs_endpoint", 00:04:27.118 "vfu_virtio_create_scsi_endpoint", 00:04:27.118 "vfu_virtio_scsi_remove_target", 00:04:27.118 "vfu_virtio_scsi_add_target", 00:04:27.118 "vfu_virtio_create_blk_endpoint", 00:04:27.118 "vfu_virtio_delete_endpoint", 00:04:27.118 "keyring_file_remove_key", 00:04:27.118 "keyring_file_add_key", 00:04:27.118 "keyring_linux_set_options", 00:04:27.118 "fsdev_aio_delete", 00:04:27.118 "fsdev_aio_create", 00:04:27.118 "iscsi_get_histogram", 00:04:27.118 "iscsi_enable_histogram", 00:04:27.118 "iscsi_set_options", 00:04:27.118 "iscsi_get_auth_groups", 00:04:27.118 "iscsi_auth_group_remove_secret", 00:04:27.118 "iscsi_auth_group_add_secret", 00:04:27.118 "iscsi_delete_auth_group", 00:04:27.118 "iscsi_create_auth_group", 00:04:27.118 "iscsi_set_discovery_auth", 00:04:27.118 "iscsi_get_options", 00:04:27.118 "iscsi_target_node_request_logout", 00:04:27.118 "iscsi_target_node_set_redirect", 00:04:27.118 "iscsi_target_node_set_auth", 00:04:27.118 "iscsi_target_node_add_lun", 00:04:27.118 "iscsi_get_stats", 00:04:27.118 "iscsi_get_connections", 00:04:27.118 "iscsi_portal_group_set_auth", 00:04:27.118 "iscsi_start_portal_group", 00:04:27.118 "iscsi_delete_portal_group", 00:04:27.118 "iscsi_create_portal_group", 00:04:27.118 "iscsi_get_portal_groups", 00:04:27.118 "iscsi_delete_target_node", 00:04:27.118 "iscsi_target_node_remove_pg_ig_maps", 00:04:27.118 "iscsi_target_node_add_pg_ig_maps", 00:04:27.118 "iscsi_create_target_node", 00:04:27.118 "iscsi_get_target_nodes", 00:04:27.118 "iscsi_delete_initiator_group", 00:04:27.118 "iscsi_initiator_group_remove_initiators", 00:04:27.118 "iscsi_initiator_group_add_initiators", 00:04:27.118 "iscsi_create_initiator_group", 00:04:27.118 "iscsi_get_initiator_groups", 00:04:27.118 "nvmf_set_crdt", 00:04:27.118 "nvmf_set_config", 00:04:27.118 "nvmf_set_max_subsystems", 00:04:27.118 "nvmf_stop_mdns_prr", 00:04:27.118 "nvmf_publish_mdns_prr", 00:04:27.118 "nvmf_subsystem_get_listeners", 00:04:27.118 
"nvmf_subsystem_get_qpairs", 00:04:27.118 "nvmf_subsystem_get_controllers", 00:04:27.118 "nvmf_get_stats", 00:04:27.118 "nvmf_get_transports", 00:04:27.118 "nvmf_create_transport", 00:04:27.118 "nvmf_get_targets", 00:04:27.118 "nvmf_delete_target", 00:04:27.118 "nvmf_create_target", 00:04:27.118 "nvmf_subsystem_allow_any_host", 00:04:27.118 "nvmf_subsystem_set_keys", 00:04:27.118 "nvmf_subsystem_remove_host", 00:04:27.118 "nvmf_subsystem_add_host", 00:04:27.118 "nvmf_ns_remove_host", 00:04:27.118 "nvmf_ns_add_host", 00:04:27.118 "nvmf_subsystem_remove_ns", 00:04:27.118 "nvmf_subsystem_set_ns_ana_group", 00:04:27.118 "nvmf_subsystem_add_ns", 00:04:27.118 "nvmf_subsystem_listener_set_ana_state", 00:04:27.118 "nvmf_discovery_get_referrals", 00:04:27.118 "nvmf_discovery_remove_referral", 00:04:27.118 "nvmf_discovery_add_referral", 00:04:27.118 "nvmf_subsystem_remove_listener", 00:04:27.118 "nvmf_subsystem_add_listener", 00:04:27.118 "nvmf_delete_subsystem", 00:04:27.118 "nvmf_create_subsystem", 00:04:27.118 "nvmf_get_subsystems", 00:04:27.118 "env_dpdk_get_mem_stats", 00:04:27.118 "nbd_get_disks", 00:04:27.118 "nbd_stop_disk", 00:04:27.118 "nbd_start_disk", 00:04:27.118 "ublk_recover_disk", 00:04:27.118 "ublk_get_disks", 00:04:27.118 "ublk_stop_disk", 00:04:27.118 "ublk_start_disk", 00:04:27.118 "ublk_destroy_target", 00:04:27.118 "ublk_create_target", 00:04:27.118 "virtio_blk_create_transport", 00:04:27.118 "virtio_blk_get_transports", 00:04:27.118 "vhost_controller_set_coalescing", 00:04:27.118 "vhost_get_controllers", 00:04:27.118 "vhost_delete_controller", 00:04:27.118 "vhost_create_blk_controller", 00:04:27.118 "vhost_scsi_controller_remove_target", 00:04:27.118 "vhost_scsi_controller_add_target", 00:04:27.118 "vhost_start_scsi_controller", 00:04:27.118 "vhost_create_scsi_controller", 00:04:27.118 "thread_set_cpumask", 00:04:27.118 "scheduler_set_options", 00:04:27.118 "framework_get_governor", 00:04:27.118 "framework_get_scheduler", 00:04:27.118 "framework_set_scheduler", 00:04:27.118 "framework_get_reactors", 00:04:27.118 "thread_get_io_channels", 00:04:27.118 "thread_get_pollers", 00:04:27.118 "thread_get_stats", 00:04:27.118 "framework_monitor_context_switch", 00:04:27.118 "spdk_kill_instance", 00:04:27.118 "log_enable_timestamps", 00:04:27.118 "log_get_flags", 00:04:27.118 "log_clear_flag", 00:04:27.118 "log_set_flag", 00:04:27.118 "log_get_level", 00:04:27.118 "log_set_level", 00:04:27.118 "log_get_print_level", 00:04:27.118 "log_set_print_level", 00:04:27.118 "framework_enable_cpumask_locks", 00:04:27.118 "framework_disable_cpumask_locks", 00:04:27.118 "framework_wait_init", 00:04:27.118 "framework_start_init", 00:04:27.118 "scsi_get_devices", 00:04:27.118 "bdev_get_histogram", 00:04:27.118 "bdev_enable_histogram", 00:04:27.118 "bdev_set_qos_limit", 00:04:27.118 "bdev_set_qd_sampling_period", 00:04:27.118 "bdev_get_bdevs", 00:04:27.118 "bdev_reset_iostat", 00:04:27.118 "bdev_get_iostat", 00:04:27.118 "bdev_examine", 00:04:27.118 "bdev_wait_for_examine", 00:04:27.118 "bdev_set_options", 00:04:27.118 "accel_get_stats", 00:04:27.118 "accel_set_options", 00:04:27.118 "accel_set_driver", 00:04:27.118 "accel_crypto_key_destroy", 00:04:27.118 "accel_crypto_keys_get", 00:04:27.118 "accel_crypto_key_create", 00:04:27.118 "accel_assign_opc", 00:04:27.118 "accel_get_module_info", 00:04:27.118 "accel_get_opc_assignments", 00:04:27.118 "vmd_rescan", 00:04:27.118 "vmd_remove_device", 00:04:27.118 "vmd_enable", 00:04:27.118 "sock_get_default_impl", 00:04:27.118 "sock_set_default_impl", 
00:04:27.118 "sock_impl_set_options", 00:04:27.118 "sock_impl_get_options", 00:04:27.118 "iobuf_get_stats", 00:04:27.118 "iobuf_set_options", 00:04:27.118 "keyring_get_keys", 00:04:27.118 "vfu_tgt_set_base_path", 00:04:27.118 "framework_get_pci_devices", 00:04:27.118 "framework_get_config", 00:04:27.118 "framework_get_subsystems", 00:04:27.118 "fsdev_set_opts", 00:04:27.118 "fsdev_get_opts", 00:04:27.118 "trace_get_info", 00:04:27.118 "trace_get_tpoint_group_mask", 00:04:27.118 "trace_disable_tpoint_group", 00:04:27.118 "trace_enable_tpoint_group", 00:04:27.118 "trace_clear_tpoint_mask", 00:04:27.118 "trace_set_tpoint_mask", 00:04:27.118 "notify_get_notifications", 00:04:27.118 "notify_get_types", 00:04:27.118 "spdk_get_version", 00:04:27.118 "rpc_get_methods" 00:04:27.118 ] 00:04:27.118 06:47:26 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:27.118 06:47:26 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:27.118 06:47:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:27.118 06:47:26 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:27.118 06:47:26 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2892376 00:04:27.118 06:47:26 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 2892376 ']' 00:04:27.118 06:47:26 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 2892376 00:04:27.118 06:47:26 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:04:27.118 06:47:26 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:27.118 06:47:26 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2892376 00:04:27.118 06:47:26 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:27.118 06:47:26 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:27.118 06:47:26 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2892376' 00:04:27.118 killing process with pid 2892376 00:04:27.118 06:47:26 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 2892376 00:04:27.118 06:47:26 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 2892376 00:04:27.379 00:04:27.379 real 0m1.532s 00:04:27.379 user 0m2.821s 00:04:27.379 sys 0m0.465s 00:04:27.379 06:47:26 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:27.379 06:47:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:27.379 ************************************ 00:04:27.379 END TEST spdkcli_tcp 00:04:27.379 ************************************ 00:04:27.379 06:47:26 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:27.379 06:47:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:27.379 06:47:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:27.379 06:47:26 -- common/autotest_common.sh@10 -- # set +x 00:04:27.379 ************************************ 00:04:27.379 START TEST dpdk_mem_utility 00:04:27.379 ************************************ 00:04:27.379 06:47:26 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:27.379 * Looking for test storage... 
00:04:27.379 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:27.379 06:47:26 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:27.379 06:47:26 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:04:27.379 06:47:26 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:27.640 06:47:26 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:27.640 06:47:26 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:27.640 06:47:26 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:27.640 06:47:26 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:27.640 06:47:26 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:27.640 06:47:26 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:27.640 06:47:26 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:27.641 06:47:26 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:27.641 06:47:26 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:27.641 06:47:26 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:27.641 06:47:26 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:27.641 06:47:26 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:27.641 06:47:26 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:27.641 06:47:26 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:27.641 06:47:26 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:27.641 06:47:26 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:27.641 06:47:26 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:27.641 06:47:26 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:27.641 06:47:26 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:27.641 06:47:26 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:27.641 06:47:26 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:27.641 06:47:26 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:27.641 06:47:26 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:27.641 06:47:26 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:27.641 06:47:26 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:27.641 06:47:26 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:27.641 06:47:26 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:27.641 06:47:26 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:27.641 06:47:26 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:27.641 06:47:26 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:27.641 06:47:26 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:27.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.641 --rc genhtml_branch_coverage=1 00:04:27.641 --rc genhtml_function_coverage=1 00:04:27.641 --rc genhtml_legend=1 00:04:27.641 --rc geninfo_all_blocks=1 00:04:27.641 --rc geninfo_unexecuted_blocks=1 00:04:27.641 00:04:27.641 ' 00:04:27.641 06:47:26 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:27.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.641 --rc 
genhtml_branch_coverage=1 00:04:27.641 --rc genhtml_function_coverage=1 00:04:27.641 --rc genhtml_legend=1 00:04:27.641 --rc geninfo_all_blocks=1 00:04:27.641 --rc geninfo_unexecuted_blocks=1 00:04:27.641 00:04:27.641 ' 00:04:27.641 06:47:26 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:27.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.641 --rc genhtml_branch_coverage=1 00:04:27.641 --rc genhtml_function_coverage=1 00:04:27.641 --rc genhtml_legend=1 00:04:27.641 --rc geninfo_all_blocks=1 00:04:27.641 --rc geninfo_unexecuted_blocks=1 00:04:27.641 00:04:27.641 ' 00:04:27.641 06:47:26 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:27.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.641 --rc genhtml_branch_coverage=1 00:04:27.641 --rc genhtml_function_coverage=1 00:04:27.641 --rc genhtml_legend=1 00:04:27.641 --rc geninfo_all_blocks=1 00:04:27.641 --rc geninfo_unexecuted_blocks=1 00:04:27.641 00:04:27.641 ' 00:04:27.641 06:47:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:27.641 06:47:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2892791 00:04:27.641 06:47:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2892791 00:04:27.641 06:47:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.641 06:47:26 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 2892791 ']' 00:04:27.641 06:47:26 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:27.641 06:47:26 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:27.641 06:47:26 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:27.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:27.641 06:47:26 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:27.641 06:47:26 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:27.641 [2024-10-16 06:47:27.034512] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
00:04:27.641 [2024-10-16 06:47:27.034589] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2892791 ] 00:04:27.641 [2024-10-16 06:47:27.113419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.902 [2024-10-16 06:47:27.150056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.476 06:47:27 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:28.476 06:47:27 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:04:28.476 06:47:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:28.476 06:47:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:28.476 06:47:27 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.476 06:47:27 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:28.476 { 00:04:28.476 "filename": "/tmp/spdk_mem_dump.txt" 00:04:28.476 } 00:04:28.476 06:47:27 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.476 06:47:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:28.476 DPDK memory size 810.000000 MiB in 1 heap(s) 00:04:28.476 1 heaps totaling size 810.000000 MiB 00:04:28.476 size: 810.000000 MiB heap id: 0 00:04:28.476 end heaps---------- 00:04:28.476 9 mempools totaling size 595.772034 MiB 00:04:28.476 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:28.476 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:28.476 size: 92.545471 MiB name: bdev_io_2892791 00:04:28.476 size: 50.003479 MiB name: msgpool_2892791 00:04:28.476 size: 36.509338 MiB name: fsdev_io_2892791 00:04:28.476 size: 21.763794 MiB name: PDU_Pool 00:04:28.476 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:28.476 size: 4.133484 MiB name: evtpool_2892791 00:04:28.476 size: 0.026123 MiB name: Session_Pool 00:04:28.476 end mempools------- 00:04:28.476 6 memzones totaling size 4.142822 MiB 00:04:28.476 size: 1.000366 MiB name: RG_ring_0_2892791 00:04:28.476 size: 1.000366 MiB name: RG_ring_1_2892791 00:04:28.476 size: 1.000366 MiB name: RG_ring_4_2892791 00:04:28.476 size: 1.000366 MiB name: RG_ring_5_2892791 00:04:28.476 size: 0.125366 MiB name: RG_ring_2_2892791 00:04:28.476 size: 0.015991 MiB name: RG_ring_3_2892791 00:04:28.476 end memzones------- 00:04:28.476 06:47:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:28.476 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:28.476 list of free elements. 
size: 10.862488 MiB 00:04:28.476 element at address: 0x200018a00000 with size: 0.999878 MiB 00:04:28.476 element at address: 0x200018c00000 with size: 0.999878 MiB 00:04:28.476 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:28.476 element at address: 0x200031800000 with size: 0.994446 MiB 00:04:28.476 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:28.476 element at address: 0x200012c00000 with size: 0.954285 MiB 00:04:28.476 element at address: 0x200018e00000 with size: 0.936584 MiB 00:04:28.476 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:28.476 element at address: 0x20001a600000 with size: 0.582886 MiB 00:04:28.476 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:28.476 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:28.476 element at address: 0x200019000000 with size: 0.485657 MiB 00:04:28.476 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:28.476 element at address: 0x200027a00000 with size: 0.410034 MiB 00:04:28.476 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:28.476 list of standard malloc elements. size: 199.218628 MiB 00:04:28.476 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:28.476 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:28.476 element at address: 0x200018afff80 with size: 1.000122 MiB 00:04:28.476 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:04:28.476 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:28.476 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:28.476 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:04:28.476 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:28.476 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:04:28.476 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:28.476 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:28.476 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:28.476 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:28.476 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:28.476 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:28.476 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:28.476 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:28.476 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:28.476 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:28.476 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:28.476 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:28.476 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:28.476 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:28.476 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:28.476 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:28.476 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:28.476 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:28.476 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:28.476 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:28.476 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:28.476 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:28.476 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:28.476 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:04:28.476 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:04:28.476 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:04:28.476 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:04:28.476 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:04:28.476 element at address: 0x20001a695380 with size: 0.000183 MiB 00:04:28.476 element at address: 0x20001a695440 with size: 0.000183 MiB 00:04:28.476 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:04:28.476 element at address: 0x200027a69040 with size: 0.000183 MiB 00:04:28.476 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:04:28.476 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:04:28.476 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:04:28.476 list of memzone associated elements. size: 599.918884 MiB 00:04:28.476 element at address: 0x20001a695500 with size: 211.416748 MiB 00:04:28.476 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:28.476 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:04:28.476 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:28.476 element at address: 0x200012df4780 with size: 92.045044 MiB 00:04:28.476 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_2892791_0 00:04:28.476 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:28.476 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2892791_0 00:04:28.476 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:28.476 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2892791_0 00:04:28.477 element at address: 0x2000191be940 with size: 20.255554 MiB 00:04:28.477 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:28.477 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:04:28.477 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:28.477 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:28.477 associated memzone info: size: 3.000122 MiB name: MP_evtpool_2892791_0 00:04:28.477 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:28.477 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2892791 00:04:28.477 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:28.477 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2892791 00:04:28.477 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:28.477 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:28.477 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:04:28.477 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:28.477 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:28.477 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:28.477 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:28.477 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:28.477 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:28.477 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2892791 00:04:28.477 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:28.477 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2892791 00:04:28.477 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:04:28.477 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2892791 00:04:28.477 element at address: 
0x2000318fe940 with size: 1.000488 MiB 00:04:28.477 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2892791 00:04:28.477 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:28.477 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2892791 00:04:28.477 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:28.477 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2892791 00:04:28.477 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:28.477 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:28.477 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:28.477 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:28.477 element at address: 0x20001907c540 with size: 0.250488 MiB 00:04:28.477 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:28.477 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:28.477 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_2892791 00:04:28.477 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:28.477 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2892791 00:04:28.477 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:28.477 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:28.477 element at address: 0x200027a69100 with size: 0.023743 MiB 00:04:28.477 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:28.477 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:28.477 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2892791 00:04:28.477 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:04:28.477 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:28.477 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:28.477 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2892791 00:04:28.477 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:28.477 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2892791 00:04:28.477 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:28.477 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2892791 00:04:28.477 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:04:28.477 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:28.477 06:47:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:28.477 06:47:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2892791 00:04:28.477 06:47:27 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 2892791 ']' 00:04:28.477 06:47:27 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 2892791 00:04:28.477 06:47:27 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:04:28.477 06:47:27 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:28.477 06:47:27 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2892791 00:04:28.739 06:47:27 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:28.739 06:47:27 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:28.739 06:47:27 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2892791' 00:04:28.739 killing process with pid 2892791 00:04:28.739 06:47:27 
dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 2892791 00:04:28.739 06:47:27 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 2892791 00:04:28.739 00:04:28.739 real 0m1.405s 00:04:28.739 user 0m1.482s 00:04:28.739 sys 0m0.417s 00:04:28.739 06:47:28 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:28.739 06:47:28 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:28.739 ************************************ 00:04:28.739 END TEST dpdk_mem_utility 00:04:28.739 ************************************ 00:04:28.739 06:47:28 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:28.739 06:47:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:28.739 06:47:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:28.739 06:47:28 -- common/autotest_common.sh@10 -- # set +x 00:04:29.000 ************************************ 00:04:29.000 START TEST event 00:04:29.000 ************************************ 00:04:29.000 06:47:28 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:29.000 * Looking for test storage... 00:04:29.000 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:29.000 06:47:28 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:29.000 06:47:28 event -- common/autotest_common.sh@1691 -- # lcov --version 00:04:29.000 06:47:28 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:29.000 06:47:28 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:29.000 06:47:28 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:29.000 06:47:28 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:29.000 06:47:28 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:29.000 06:47:28 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:29.000 06:47:28 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:29.000 06:47:28 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:29.000 06:47:28 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:29.000 06:47:28 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:29.000 06:47:28 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:29.000 06:47:28 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:29.000 06:47:28 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:29.000 06:47:28 event -- scripts/common.sh@344 -- # case "$op" in 00:04:29.000 06:47:28 event -- scripts/common.sh@345 -- # : 1 00:04:29.000 06:47:28 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:29.000 06:47:28 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:29.000 06:47:28 event -- scripts/common.sh@365 -- # decimal 1 00:04:29.000 06:47:28 event -- scripts/common.sh@353 -- # local d=1 00:04:29.000 06:47:28 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:29.000 06:47:28 event -- scripts/common.sh@355 -- # echo 1 00:04:29.000 06:47:28 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:29.000 06:47:28 event -- scripts/common.sh@366 -- # decimal 2 00:04:29.000 06:47:28 event -- scripts/common.sh@353 -- # local d=2 00:04:29.000 06:47:28 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:29.000 06:47:28 event -- scripts/common.sh@355 -- # echo 2 00:04:29.000 06:47:28 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:29.000 06:47:28 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:29.000 06:47:28 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:29.000 06:47:28 event -- scripts/common.sh@368 -- # return 0 00:04:29.000 06:47:28 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:29.000 06:47:28 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:29.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.000 --rc genhtml_branch_coverage=1 00:04:29.000 --rc genhtml_function_coverage=1 00:04:29.000 --rc genhtml_legend=1 00:04:29.000 --rc geninfo_all_blocks=1 00:04:29.000 --rc geninfo_unexecuted_blocks=1 00:04:29.000 00:04:29.000 ' 00:04:29.000 06:47:28 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:29.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.000 --rc genhtml_branch_coverage=1 00:04:29.000 --rc genhtml_function_coverage=1 00:04:29.000 --rc genhtml_legend=1 00:04:29.000 --rc geninfo_all_blocks=1 00:04:29.000 --rc geninfo_unexecuted_blocks=1 00:04:29.000 00:04:29.000 ' 00:04:29.000 06:47:28 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:29.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.000 --rc genhtml_branch_coverage=1 00:04:29.000 --rc genhtml_function_coverage=1 00:04:29.000 --rc genhtml_legend=1 00:04:29.000 --rc geninfo_all_blocks=1 00:04:29.000 --rc geninfo_unexecuted_blocks=1 00:04:29.000 00:04:29.000 ' 00:04:29.000 06:47:28 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:29.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.000 --rc genhtml_branch_coverage=1 00:04:29.000 --rc genhtml_function_coverage=1 00:04:29.000 --rc genhtml_legend=1 00:04:29.000 --rc geninfo_all_blocks=1 00:04:29.000 --rc geninfo_unexecuted_blocks=1 00:04:29.000 00:04:29.000 ' 00:04:29.000 06:47:28 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:29.000 06:47:28 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:29.000 06:47:28 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:29.000 06:47:28 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:04:29.000 06:47:28 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:29.000 06:47:28 event -- common/autotest_common.sh@10 -- # set +x 00:04:29.000 ************************************ 00:04:29.000 START TEST event_perf 00:04:29.000 ************************************ 00:04:29.000 06:47:28 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:04:29.000 Running I/O for 1 seconds...[2024-10-16 06:47:28.477319] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:04:29.000 [2024-10-16 06:47:28.477415] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2893192 ] 00:04:29.261 [2024-10-16 06:47:28.559396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:29.261 [2024-10-16 06:47:28.602890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:29.261 [2024-10-16 06:47:28.602978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:29.261 [2024-10-16 06:47:28.603133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.261 Running I/O for 1 seconds...[2024-10-16 06:47:28.603134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:30.206 00:04:30.206 lcore 0: 176376 00:04:30.206 lcore 1: 176378 00:04:30.206 lcore 2: 176378 00:04:30.206 lcore 3: 176378 00:04:30.206 done. 00:04:30.206 00:04:30.206 real 0m1.175s 00:04:30.206 user 0m4.086s 00:04:30.206 sys 0m0.085s 00:04:30.206 06:47:29 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:30.206 06:47:29 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:30.206 ************************************ 00:04:30.206 END TEST event_perf 00:04:30.206 ************************************ 00:04:30.206 06:47:29 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:30.206 06:47:29 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:04:30.206 06:47:29 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:30.206 06:47:29 event -- common/autotest_common.sh@10 -- # set +x 00:04:30.467 ************************************ 00:04:30.467 START TEST event_reactor 00:04:30.467 ************************************ 00:04:30.467 06:47:29 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:30.467 [2024-10-16 06:47:29.730509] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
00:04:30.467 [2024-10-16 06:47:29.730610] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2893545 ] 00:04:30.467 [2024-10-16 06:47:29.817996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.467 [2024-10-16 06:47:29.848682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.409 test_start 00:04:31.409 oneshot 00:04:31.409 tick 100 00:04:31.409 tick 100 00:04:31.409 tick 250 00:04:31.409 tick 100 00:04:31.409 tick 100 00:04:31.409 tick 100 00:04:31.409 tick 250 00:04:31.409 tick 500 00:04:31.409 tick 100 00:04:31.409 tick 100 00:04:31.409 tick 250 00:04:31.409 tick 100 00:04:31.409 tick 100 00:04:31.409 test_end 00:04:31.409 00:04:31.409 real 0m1.166s 00:04:31.409 user 0m1.090s 00:04:31.409 sys 0m0.073s 00:04:31.409 06:47:30 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:31.409 06:47:30 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:31.409 ************************************ 00:04:31.409 END TEST event_reactor 00:04:31.409 ************************************ 00:04:31.671 06:47:30 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:31.671 06:47:30 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:04:31.671 06:47:30 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:31.671 06:47:30 event -- common/autotest_common.sh@10 -- # set +x 00:04:31.671 ************************************ 00:04:31.671 START TEST event_reactor_perf 00:04:31.671 ************************************ 00:04:31.671 06:47:30 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:31.671 [2024-10-16 06:47:30.979067] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
00:04:31.671 [2024-10-16 06:47:30.979166] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2893749 ] 00:04:31.671 [2024-10-16 06:47:31.061839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.671 [2024-10-16 06:47:31.099638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.056 test_start 00:04:33.056 test_end 00:04:33.056 Performance: 540680 events per second 00:04:33.056 00:04:33.056 real 0m1.169s 00:04:33.056 user 0m1.087s 00:04:33.056 sys 0m0.079s 00:04:33.056 06:47:32 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:33.056 06:47:32 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:33.056 ************************************ 00:04:33.056 END TEST event_reactor_perf 00:04:33.056 ************************************ 00:04:33.056 06:47:32 event -- event/event.sh@49 -- # uname -s 00:04:33.056 06:47:32 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:33.056 06:47:32 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:33.056 06:47:32 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:33.056 06:47:32 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:33.056 06:47:32 event -- common/autotest_common.sh@10 -- # set +x 00:04:33.056 ************************************ 00:04:33.056 START TEST event_scheduler 00:04:33.056 ************************************ 00:04:33.056 06:47:32 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:33.056 * Looking for test storage... 
00:04:33.056 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:33.056 06:47:32 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:33.056 06:47:32 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:04:33.056 06:47:32 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:33.056 06:47:32 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:33.056 06:47:32 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:33.056 06:47:32 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:33.056 06:47:32 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:33.056 06:47:32 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:33.056 06:47:32 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:33.056 06:47:32 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:33.056 06:47:32 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:33.056 06:47:32 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:33.056 06:47:32 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:33.056 06:47:32 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:33.056 06:47:32 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:33.056 06:47:32 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:33.056 06:47:32 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:33.056 06:47:32 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:33.056 06:47:32 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:33.056 06:47:32 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:33.056 06:47:32 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:33.056 06:47:32 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:33.056 06:47:32 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:33.056 06:47:32 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:33.056 06:47:32 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:33.056 06:47:32 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:33.056 06:47:32 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:33.056 06:47:32 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:33.056 06:47:32 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:33.056 06:47:32 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:33.056 06:47:32 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:33.056 06:47:32 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:33.056 06:47:32 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:33.056 06:47:32 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:33.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.056 --rc genhtml_branch_coverage=1 00:04:33.056 --rc genhtml_function_coverage=1 00:04:33.056 --rc genhtml_legend=1 00:04:33.056 --rc geninfo_all_blocks=1 00:04:33.056 --rc geninfo_unexecuted_blocks=1 00:04:33.056 00:04:33.056 ' 00:04:33.056 06:47:32 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:33.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.056 --rc genhtml_branch_coverage=1 00:04:33.056 --rc genhtml_function_coverage=1 00:04:33.056 --rc genhtml_legend=1 00:04:33.056 --rc geninfo_all_blocks=1 00:04:33.056 --rc geninfo_unexecuted_blocks=1 00:04:33.056 00:04:33.056 ' 00:04:33.056 06:47:32 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:33.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.056 --rc genhtml_branch_coverage=1 00:04:33.056 --rc genhtml_function_coverage=1 00:04:33.056 --rc genhtml_legend=1 00:04:33.056 --rc geninfo_all_blocks=1 00:04:33.056 --rc geninfo_unexecuted_blocks=1 00:04:33.056 00:04:33.056 ' 00:04:33.056 06:47:32 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:33.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.056 --rc genhtml_branch_coverage=1 00:04:33.056 --rc genhtml_function_coverage=1 00:04:33.056 --rc genhtml_legend=1 00:04:33.056 --rc geninfo_all_blocks=1 00:04:33.056 --rc geninfo_unexecuted_blocks=1 00:04:33.056 00:04:33.056 ' 00:04:33.056 06:47:32 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:33.056 06:47:32 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2894017 00:04:33.056 06:47:32 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:33.056 06:47:32 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2894017 00:04:33.056 06:47:32 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc 
-f 00:04:33.056 06:47:32 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 2894017 ']' 00:04:33.056 06:47:32 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:33.056 06:47:32 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:33.056 06:47:32 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:33.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:33.056 06:47:32 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:33.056 06:47:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:33.056 [2024-10-16 06:47:32.462395] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:04:33.056 [2024-10-16 06:47:32.462471] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2894017 ] 00:04:33.056 [2024-10-16 06:47:32.543624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:33.317 [2024-10-16 06:47:32.599451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.317 [2024-10-16 06:47:32.599612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:33.317 [2024-10-16 06:47:32.599774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:33.317 [2024-10-16 06:47:32.599775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:33.890 06:47:33 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:33.890 06:47:33 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:04:33.890 06:47:33 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:33.890 06:47:33 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:33.890 06:47:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:33.890 [2024-10-16 06:47:33.282206] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:33.890 [2024-10-16 06:47:33.282223] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:33.890 [2024-10-16 06:47:33.282233] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:33.890 [2024-10-16 06:47:33.282239] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:33.890 [2024-10-16 06:47:33.282244] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:33.890 06:47:33 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:33.890 06:47:33 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:33.890 06:47:33 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:33.890 06:47:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:33.890 [2024-10-16 06:47:33.344223] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
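The scheduler app above was launched with --wait-for-rpc, switched to the dynamic scheduler over RPC, and only then completed startup with framework_start_init, as the trace shows. A sketch of that ordering against any target started with --wait-for-rpc:

    # must run before framework initialization completes
    ./scripts/rpc.py framework_set_scheduler dynamic
    # finish startup; the dynamic scheduler is now in effect
    ./scripts/rpc.py framework_start_init
    # confirm which scheduler is active
    ./scripts/rpc.py framework_get_scheduler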
00:04:33.890 06:47:33 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:33.890 06:47:33 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:33.890 06:47:33 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:33.890 06:47:33 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:33.890 06:47:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:33.890 ************************************ 00:04:33.890 START TEST scheduler_create_thread 00:04:33.890 ************************************ 00:04:33.890 06:47:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:04:33.890 06:47:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:33.890 06:47:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:33.890 06:47:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:34.152 2 00:04:34.152 06:47:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.152 06:47:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:34.152 06:47:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.152 06:47:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:34.152 3 00:04:34.152 06:47:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.152 06:47:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:34.152 06:47:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.152 06:47:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:34.152 4 00:04:34.152 06:47:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.152 06:47:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:34.152 06:47:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.152 06:47:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:34.152 5 00:04:34.152 06:47:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.152 06:47:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:34.152 06:47:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.152 06:47:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:34.152 6 00:04:34.152 06:47:33 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.152 06:47:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:34.152 06:47:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.152 06:47:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:34.152 7 00:04:34.152 06:47:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.152 06:47:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:34.152 06:47:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.152 06:47:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:34.152 8 00:04:34.152 06:47:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.152 06:47:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:34.152 06:47:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.152 06:47:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:34.152 9 00:04:34.152 06:47:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.152 06:47:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:34.152 06:47:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.152 06:47:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:34.725 10 00:04:34.725 06:47:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.725 06:47:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:34.725 06:47:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.725 06:47:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:36.109 06:47:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.109 06:47:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:36.109 06:47:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:36.109 06:47:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.109 06:47:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:36.680 06:47:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.680 06:47:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:36.680 06:47:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.680 06:47:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.621 06:47:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.621 06:47:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:37.621 06:47:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:37.621 06:47:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.621 06:47:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:38.192 06:47:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:38.192 00:04:38.192 real 0m4.224s 00:04:38.192 user 0m0.023s 00:04:38.192 sys 0m0.009s 00:04:38.192 06:47:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:38.192 06:47:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:38.192 ************************************ 00:04:38.192 END TEST scheduler_create_thread 00:04:38.192 ************************************ 00:04:38.192 06:47:37 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:38.192 06:47:37 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2894017 00:04:38.192 06:47:37 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 2894017 ']' 00:04:38.192 06:47:37 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 2894017 00:04:38.192 06:47:37 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:04:38.192 06:47:37 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:38.192 06:47:37 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2894017 00:04:38.454 06:47:37 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:04:38.454 06:47:37 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:04:38.454 06:47:37 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2894017' 00:04:38.454 killing process with pid 2894017 00:04:38.454 06:47:37 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 2894017 00:04:38.454 06:47:37 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 2894017 00:04:38.714 [2024-10-16 06:47:37.986035] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
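scheduler_create_thread above drives RPCs that live in the test's own scheduler_plugin, loaded via rpc.py --plugin, to create, rebalance, and delete threads while the dynamic scheduler runs. A sketch of that lifecycle, assuming the plugin is importable by rpc.py; the thread ids 11 and 12 are the ones returned in this run and will differ between runs:

    # create a thread pinned to core 0 that reports 100% active load
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    # drop an existing thread's reported active load to 50%
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
    # delete a thread by id
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12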
00:04:38.714 00:04:38.714 real 0m5.933s 00:04:38.714 user 0m13.859s 00:04:38.714 sys 0m0.442s 00:04:38.714 06:47:38 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:38.714 06:47:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:38.714 ************************************ 00:04:38.714 END TEST event_scheduler 00:04:38.714 ************************************ 00:04:38.714 06:47:38 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:38.714 06:47:38 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:38.714 06:47:38 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:38.714 06:47:38 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:38.714 06:47:38 event -- common/autotest_common.sh@10 -- # set +x 00:04:38.976 ************************************ 00:04:38.976 START TEST app_repeat 00:04:38.976 ************************************ 00:04:38.976 06:47:38 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:04:38.976 06:47:38 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:38.976 06:47:38 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:38.976 06:47:38 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:38.976 06:47:38 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:38.976 06:47:38 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:38.976 06:47:38 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:38.976 06:47:38 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:38.976 06:47:38 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2895353 00:04:38.976 06:47:38 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:38.976 06:47:38 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:38.976 06:47:38 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2895353' 00:04:38.976 Process app_repeat pid: 2895353 00:04:38.976 06:47:38 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:38.976 06:47:38 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:38.976 spdk_app_start Round 0 00:04:38.976 06:47:38 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2895353 /var/tmp/spdk-nbd.sock 00:04:38.976 06:47:38 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2895353 ']' 00:04:38.976 06:47:38 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:38.976 06:47:38 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:38.976 06:47:38 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:38.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:38.976 06:47:38 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:38.976 06:47:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:38.976 [2024-10-16 06:47:38.265793] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
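Here app_repeat is launched in the background with the nbd RPC socket, a two-core mask, and a 4-second per-round timer, and the harness then blocks until the app answers RPCs. A condensed sketch of that launch-and-wait pattern; this waitforlisten is a simplified stand-in for the autotest_common.sh helper, and the retry count and sleep interval are illustrative.

    set -euo pipefail

    sock=/var/tmp/spdk-nbd.sock
    ./test/event/app_repeat/app_repeat -r "$sock" -m 0x3 -t 4 &
    pid=$!

    # Poll until the pid is alive and the RPC socket answers a no-op RPC.
    waitforlisten() {
        local i
        for ((i = 0; i < 100; i++)); do
            if kill -0 "$1" 2>/dev/null &&
               ./scripts/rpc.py -s "$2" rpc_get_methods &>/dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }
    waitforlisten "$pid" "$sock"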
00:04:38.976 [2024-10-16 06:47:38.265862] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2895353 ] 00:04:38.976 [2024-10-16 06:47:38.341037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:38.976 [2024-10-16 06:47:38.372260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.976 [2024-10-16 06:47:38.372261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:38.976 06:47:38 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:38.976 06:47:38 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:38.976 06:47:38 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:39.237 Malloc0 00:04:39.237 06:47:38 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:39.498 Malloc1 00:04:39.498 06:47:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:39.498 06:47:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.498 06:47:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:39.498 06:47:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:39.498 06:47:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:39.498 06:47:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:39.498 06:47:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:39.498 06:47:38 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.498 06:47:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:39.498 06:47:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:39.498 06:47:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:39.498 06:47:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:39.498 06:47:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:39.498 06:47:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:39.498 06:47:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:39.498 06:47:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:39.759 /dev/nbd0 00:04:39.759 06:47:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:39.759 06:47:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:39.759 06:47:39 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:39.759 06:47:39 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:39.759 06:47:39 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:39.759 06:47:39 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:39.759 06:47:39 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 
/proc/partitions 00:04:39.759 06:47:39 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:39.759 06:47:39 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:39.759 06:47:39 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:39.759 06:47:39 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:39.759 1+0 records in 00:04:39.759 1+0 records out 00:04:39.759 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0033434 s, 1.2 MB/s 00:04:39.759 06:47:39 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:39.759 06:47:39 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:39.759 06:47:39 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:39.759 06:47:39 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:39.759 06:47:39 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:39.759 06:47:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:39.759 06:47:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:39.759 06:47:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:39.759 /dev/nbd1 00:04:40.020 06:47:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:40.020 06:47:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:40.020 06:47:39 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:40.020 06:47:39 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:40.020 06:47:39 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:40.020 06:47:39 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:40.020 06:47:39 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:40.020 06:47:39 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:40.020 06:47:39 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:40.020 06:47:39 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:40.020 06:47:39 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:40.020 1+0 records in 00:04:40.020 1+0 records out 00:04:40.020 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274607 s, 14.9 MB/s 00:04:40.020 06:47:39 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:40.020 06:47:39 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:40.020 06:47:39 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:40.020 06:47:39 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:40.020 06:47:39 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:40.020 06:47:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:40.020 06:47:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:40.020 06:47:39 
event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:40.020 06:47:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.020 06:47:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:40.020 06:47:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:40.020 { 00:04:40.020 "nbd_device": "/dev/nbd0", 00:04:40.020 "bdev_name": "Malloc0" 00:04:40.020 }, 00:04:40.020 { 00:04:40.020 "nbd_device": "/dev/nbd1", 00:04:40.020 "bdev_name": "Malloc1" 00:04:40.020 } 00:04:40.020 ]' 00:04:40.021 06:47:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:40.021 { 00:04:40.021 "nbd_device": "/dev/nbd0", 00:04:40.021 "bdev_name": "Malloc0" 00:04:40.021 }, 00:04:40.021 { 00:04:40.021 "nbd_device": "/dev/nbd1", 00:04:40.021 "bdev_name": "Malloc1" 00:04:40.021 } 00:04:40.021 ]' 00:04:40.021 06:47:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:40.280 06:47:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:40.280 /dev/nbd1' 00:04:40.280 06:47:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:40.280 /dev/nbd1' 00:04:40.280 06:47:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:40.280 06:47:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:40.280 06:47:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:40.280 06:47:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:40.280 06:47:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:40.280 06:47:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:40.280 06:47:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.280 06:47:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:40.280 06:47:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:40.280 06:47:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:40.280 06:47:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:40.280 06:47:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:40.280 256+0 records in 00:04:40.280 256+0 records out 00:04:40.280 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0123628 s, 84.8 MB/s 00:04:40.281 06:47:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:40.281 06:47:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:40.281 256+0 records in 00:04:40.281 256+0 records out 00:04:40.281 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119675 s, 87.6 MB/s 00:04:40.281 06:47:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:40.281 06:47:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:40.281 256+0 records in 00:04:40.281 256+0 records out 00:04:40.281 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0130079 s, 80.6 MB/s 00:04:40.281 06:47:39 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:40.281 06:47:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.281 06:47:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:40.281 06:47:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:40.281 06:47:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:40.281 06:47:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:40.281 06:47:39 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:40.281 06:47:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:40.281 06:47:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:40.281 06:47:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:40.281 06:47:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:40.281 06:47:39 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:40.281 06:47:39 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:40.281 06:47:39 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.281 06:47:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.281 06:47:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:40.281 06:47:39 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:40.281 06:47:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:40.281 06:47:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:40.541 06:47:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:40.542 06:47:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:40.542 06:47:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:40.542 06:47:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:40.542 06:47:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:40.542 06:47:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:40.542 06:47:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:40.542 06:47:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:40.542 06:47:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:40.542 06:47:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:40.542 06:47:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:40.542 06:47:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:40.542 06:47:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:40.542 06:47:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:40.542 06:47:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:04:40.542 06:47:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:40.542 06:47:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:40.542 06:47:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:40.542 06:47:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:40.542 06:47:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.542 06:47:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:40.803 06:47:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:40.803 06:47:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:40.803 06:47:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:40.803 06:47:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:40.803 06:47:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:40.803 06:47:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:40.803 06:47:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:40.803 06:47:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:40.803 06:47:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:40.803 06:47:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:40.803 06:47:40 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:40.803 06:47:40 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:40.803 06:47:40 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:41.064 06:47:40 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:41.064 [2024-10-16 06:47:40.501705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:41.064 [2024-10-16 06:47:40.532110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.064 [2024-10-16 06:47:40.532110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:41.064 [2024-10-16 06:47:40.560888] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:41.064 [2024-10-16 06:47:40.560920] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:44.364 06:47:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:44.364 06:47:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:44.364 spdk_app_start Round 1 00:04:44.364 06:47:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2895353 /var/tmp/spdk-nbd.sock 00:04:44.364 06:47:43 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2895353 ']' 00:04:44.364 06:47:43 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:44.364 06:47:43 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:44.364 06:47:43 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:44.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
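Each nbd_start_disk in the round above is followed by a waitfornbd probe: poll /proc/partitions for the device name, then read one 4 KiB block with direct I/O to prove the device actually serves data. A rough reconstruction of that helper; the scratch path and retry spacing are illustrative.

    waitfornbd() {
        local nbd_name=$1 i size
        # Wait (up to 20 tries) for the kernel to publish the device.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # One direct-I/O read, then confirm a non-empty copy landed.
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]
    }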
00:04:44.364 06:47:43 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:44.364 06:47:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:44.364 06:47:43 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:44.364 06:47:43 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:44.364 06:47:43 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:44.364 Malloc0 00:04:44.364 06:47:43 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:44.626 Malloc1 00:04:44.626 06:47:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:44.626 06:47:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:44.626 06:47:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:44.626 06:47:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:44.626 06:47:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:44.626 06:47:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:44.626 06:47:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:44.626 06:47:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:44.626 06:47:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:44.626 06:47:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:44.626 06:47:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:44.626 06:47:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:44.626 06:47:43 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:44.626 06:47:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:44.626 06:47:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:44.626 06:47:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:44.887 /dev/nbd0 00:04:44.887 06:47:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:44.887 06:47:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:44.887 06:47:44 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:44.887 06:47:44 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:44.887 06:47:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:44.887 06:47:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:44.887 06:47:44 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:04:44.887 06:47:44 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:44.887 06:47:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:44.887 06:47:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:44.887 06:47:44 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:44.887 1+0 records in 00:04:44.887 1+0 records out 00:04:44.887 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000270514 s, 15.1 MB/s 00:04:44.887 06:47:44 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:44.887 06:47:44 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:44.887 06:47:44 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:44.887 06:47:44 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:44.887 06:47:44 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:44.887 06:47:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:44.887 06:47:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:44.887 06:47:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:44.887 /dev/nbd1 00:04:45.148 06:47:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:45.148 06:47:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:45.148 06:47:44 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:45.148 06:47:44 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:45.148 06:47:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:45.148 06:47:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:45.148 06:47:44 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:45.148 06:47:44 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:45.148 06:47:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:45.148 06:47:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:45.148 06:47:44 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:45.148 1+0 records in 00:04:45.148 1+0 records out 00:04:45.148 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027901 s, 14.7 MB/s 00:04:45.148 06:47:44 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:45.148 06:47:44 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:45.148 06:47:44 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:45.148 06:47:44 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:45.148 06:47:44 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:45.148 06:47:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:45.148 06:47:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:45.148 06:47:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:45.148 06:47:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.148 06:47:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:45.148 06:47:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:45.148 { 00:04:45.148 "nbd_device": "/dev/nbd0", 00:04:45.148 "bdev_name": "Malloc0" 00:04:45.148 }, 00:04:45.148 { 00:04:45.148 "nbd_device": "/dev/nbd1", 00:04:45.148 "bdev_name": "Malloc1" 00:04:45.148 } 00:04:45.148 ]' 00:04:45.148 06:47:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:45.148 { 00:04:45.148 "nbd_device": "/dev/nbd0", 00:04:45.148 "bdev_name": "Malloc0" 00:04:45.148 }, 00:04:45.148 { 00:04:45.148 "nbd_device": "/dev/nbd1", 00:04:45.148 "bdev_name": "Malloc1" 00:04:45.148 } 00:04:45.148 ]' 00:04:45.148 06:47:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:45.409 06:47:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:45.409 /dev/nbd1' 00:04:45.409 06:47:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:45.409 /dev/nbd1' 00:04:45.409 06:47:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:45.409 06:47:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:45.409 06:47:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:45.409 06:47:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:45.409 06:47:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:45.409 06:47:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:45.409 06:47:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.409 06:47:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:45.409 06:47:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:45.409 06:47:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:45.409 06:47:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:45.409 06:47:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:45.409 256+0 records in 00:04:45.409 256+0 records out 00:04:45.409 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127531 s, 82.2 MB/s 00:04:45.409 06:47:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:45.409 06:47:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:45.409 256+0 records in 00:04:45.409 256+0 records out 00:04:45.409 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0126553 s, 82.9 MB/s 00:04:45.409 06:47:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:45.409 06:47:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:45.409 256+0 records in 00:04:45.409 256+0 records out 00:04:45.409 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127584 s, 82.2 MB/s 00:04:45.409 06:47:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:45.409 06:47:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.409 06:47:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:45.409 06:47:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:45.409 06:47:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:45.409 06:47:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:45.409 06:47:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:45.409 06:47:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:45.409 06:47:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:45.409 06:47:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:45.409 06:47:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:45.409 06:47:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:45.409 06:47:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:45.409 06:47:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.409 06:47:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.409 06:47:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:45.409 06:47:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:45.409 06:47:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:45.409 06:47:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:45.670 06:47:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:45.670 06:47:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:45.670 06:47:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:45.670 06:47:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:45.670 06:47:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:45.670 06:47:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:45.670 06:47:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:45.670 06:47:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:45.670 06:47:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:45.670 06:47:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:45.670 06:47:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:45.670 06:47:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:45.670 06:47:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:45.670 06:47:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:45.670 06:47:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:45.670 06:47:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:45.670 06:47:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:45.670 06:47:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:45.670 06:47:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:45.670 06:47:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.670 06:47:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:45.930 06:47:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:45.930 06:47:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:45.930 06:47:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:45.930 06:47:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:45.930 06:47:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:45.930 06:47:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:45.930 06:47:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:45.930 06:47:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:45.930 06:47:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:45.930 06:47:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:45.930 06:47:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:45.930 06:47:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:45.930 06:47:45 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:46.191 06:47:45 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:46.191 [2024-10-16 06:47:45.633908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:46.191 [2024-10-16 06:47:45.663200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.191 [2024-10-16 06:47:45.663200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:46.452 [2024-10-16 06:47:45.692489] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:46.452 [2024-10-16 06:47:45.692519] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:49.750 06:47:48 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:49.750 06:47:48 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:49.750 spdk_app_start Round 2 00:04:49.750 06:47:48 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2895353 /var/tmp/spdk-nbd.sock 00:04:49.750 06:47:48 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2895353 ']' 00:04:49.750 06:47:48 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:49.750 06:47:48 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:49.750 06:47:48 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:49.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
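The body of every round is the nbd_dd_data_verify write/verify cycle traced above: fill a scratch file with 1 MiB of random data, push it through each nbd device with direct I/O, then byte-compare each device against the source. Condensed into a standalone sketch; the device list and scratch path are illustrative.

    tmp=/tmp/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)

    dd if=/dev/urandom of="$tmp" bs=4096 count=256        # 1 MiB of random data
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp" "$dev"                        # nonzero exit on any mismatch
    done
    rm "$tmp"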
00:04:49.750 06:47:48 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:49.750 06:47:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:49.750 06:47:48 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:49.750 06:47:48 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:49.750 06:47:48 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:49.750 Malloc0 00:04:49.750 06:47:48 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:49.750 Malloc1 00:04:49.750 06:47:49 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:49.750 06:47:49 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.750 06:47:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:49.750 06:47:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:49.750 06:47:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.750 06:47:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:49.750 06:47:49 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:49.750 06:47:49 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.750 06:47:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:49.750 06:47:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:49.750 06:47:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.750 06:47:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:49.750 06:47:49 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:49.750 06:47:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:49.750 06:47:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:49.750 06:47:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:50.011 /dev/nbd0 00:04:50.011 06:47:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:50.011 06:47:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:50.011 06:47:49 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:50.011 06:47:49 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:50.011 06:47:49 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:50.011 06:47:49 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:50.011 06:47:49 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:04:50.011 06:47:49 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:50.011 06:47:49 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:50.011 06:47:49 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:50.011 06:47:49 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:50.011 1+0 records in 00:04:50.011 1+0 records out 00:04:50.011 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276914 s, 14.8 MB/s 00:04:50.011 06:47:49 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:50.011 06:47:49 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:50.011 06:47:49 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:50.011 06:47:49 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:50.011 06:47:49 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:50.011 06:47:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:50.011 06:47:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:50.011 06:47:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:50.272 /dev/nbd1 00:04:50.272 06:47:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:50.272 06:47:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:50.272 06:47:49 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:50.272 06:47:49 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:50.272 06:47:49 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:50.272 06:47:49 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:50.272 06:47:49 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:50.272 06:47:49 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:50.272 06:47:49 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:50.272 06:47:49 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:50.272 06:47:49 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:50.272 1+0 records in 00:04:50.272 1+0 records out 00:04:50.272 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00024268 s, 16.9 MB/s 00:04:50.272 06:47:49 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:50.272 06:47:49 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:50.272 06:47:49 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:50.272 06:47:49 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:50.272 06:47:49 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:50.272 06:47:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:50.272 06:47:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:50.272 06:47:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:50.272 06:47:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.272 06:47:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:50.272 06:47:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:50.272 { 00:04:50.272 "nbd_device": "/dev/nbd0", 00:04:50.272 "bdev_name": "Malloc0" 00:04:50.272 }, 00:04:50.272 { 00:04:50.272 "nbd_device": "/dev/nbd1", 00:04:50.272 "bdev_name": "Malloc1" 00:04:50.272 } 00:04:50.272 ]' 00:04:50.272 06:47:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:50.272 { 00:04:50.272 "nbd_device": "/dev/nbd0", 00:04:50.272 "bdev_name": "Malloc0" 00:04:50.272 }, 00:04:50.272 { 00:04:50.272 "nbd_device": "/dev/nbd1", 00:04:50.272 "bdev_name": "Malloc1" 00:04:50.272 } 00:04:50.272 ]' 00:04:50.272 06:47:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:50.533 06:47:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:50.533 /dev/nbd1' 00:04:50.533 06:47:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:50.533 /dev/nbd1' 00:04:50.533 06:47:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:50.533 06:47:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:50.533 06:47:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:50.533 06:47:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:50.533 06:47:49 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:50.533 06:47:49 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:50.533 06:47:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.533 06:47:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:50.533 06:47:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:50.533 06:47:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:50.533 06:47:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:50.533 06:47:49 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:50.533 256+0 records in 00:04:50.533 256+0 records out 00:04:50.533 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127178 s, 82.4 MB/s 00:04:50.533 06:47:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:50.533 06:47:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:50.533 256+0 records in 00:04:50.533 256+0 records out 00:04:50.533 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120928 s, 86.7 MB/s 00:04:50.533 06:47:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:50.533 06:47:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:50.533 256+0 records in 00:04:50.533 256+0 records out 00:04:50.533 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127453 s, 82.3 MB/s 00:04:50.533 06:47:49 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:50.533 06:47:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.533 06:47:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:50.533 06:47:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:50.533 06:47:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:50.533 06:47:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:50.533 06:47:49 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:50.533 06:47:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:50.533 06:47:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:50.533 06:47:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:50.533 06:47:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:50.533 06:47:49 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:50.533 06:47:49 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:50.533 06:47:49 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.533 06:47:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.533 06:47:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:50.533 06:47:49 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:50.533 06:47:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:50.533 06:47:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:50.793 06:47:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:50.793 06:47:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:50.793 06:47:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:50.793 06:47:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:50.793 06:47:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:50.793 06:47:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:50.793 06:47:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:50.793 06:47:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:50.793 06:47:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:50.793 06:47:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:50.793 06:47:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:50.793 06:47:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:50.793 06:47:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:50.793 06:47:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:50.793 06:47:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:50.793 06:47:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:50.793 06:47:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:50.793 06:47:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:50.793 06:47:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:50.793 06:47:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.793 06:47:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:51.055 06:47:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:51.055 06:47:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:51.055 06:47:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:51.055 06:47:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:51.055 06:47:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:51.055 06:47:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:51.055 06:47:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:51.055 06:47:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:51.055 06:47:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:51.055 06:47:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:51.055 06:47:50 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:51.055 06:47:50 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:51.055 06:47:50 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:51.314 06:47:50 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:51.314 [2024-10-16 06:47:50.784912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:51.314 [2024-10-16 06:47:50.815096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.314 [2024-10-16 06:47:50.815097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:51.575 [2024-10-16 06:47:50.844258] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:51.575 [2024-10-16 06:47:50.844292] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:54.875 06:47:53 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2895353 /var/tmp/spdk-nbd.sock 00:04:54.875 06:47:53 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2895353 ']' 00:04:54.875 06:47:53 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:54.875 06:47:53 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:54.875 06:47:53 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:54.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
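After both devices are stopped, nbd_get_count asserts that nothing is still exported: nbd_get_disks returns an empty JSON array, jq yields no device paths, and the grep -c count must be 0. The same check as a few lines of shell, with the socket path taken from the trace:

    sock=/var/tmp/spdk-nbd.sock
    disks=$(./scripts/rpc.py -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device')
    count=$(grep -c /dev/nbd <<< "$disks" || true)   # grep -c exits 1 on zero matches
    [ "$count" -eq 0 ]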
00:04:54.875 06:47:53 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:54.875 06:47:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:54.875 06:47:53 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:54.875 06:47:53 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:54.875 06:47:53 event.app_repeat -- event/event.sh@39 -- # killprocess 2895353 00:04:54.875 06:47:53 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 2895353 ']' 00:04:54.875 06:47:53 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 2895353 00:04:54.875 06:47:53 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:04:54.875 06:47:53 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:54.875 06:47:53 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2895353 00:04:54.875 06:47:53 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:54.875 06:47:53 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:54.875 06:47:53 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2895353' 00:04:54.875 killing process with pid 2895353 00:04:54.875 06:47:53 event.app_repeat -- common/autotest_common.sh@969 -- # kill 2895353 00:04:54.875 06:47:53 event.app_repeat -- common/autotest_common.sh@974 -- # wait 2895353 00:04:54.875 spdk_app_start is called in Round 0. 00:04:54.875 Shutdown signal received, stop current app iteration 00:04:54.875 Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 reinitialization... 00:04:54.875 spdk_app_start is called in Round 1. 00:04:54.875 Shutdown signal received, stop current app iteration 00:04:54.875 Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 reinitialization... 00:04:54.875 spdk_app_start is called in Round 2. 00:04:54.875 Shutdown signal received, stop current app iteration 00:04:54.875 Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 reinitialization... 00:04:54.875 spdk_app_start is called in Round 3. 
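The teardown runs the killprocess idiom seen above: check that the pid string is non-empty, probe the process with kill -0, refuse to signal anything whose comm is sudo, then kill and reap it. A simplified sketch; note that wait only reaps children of the calling shell.

    killprocess() {
        local pid=$1 name
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 1     # still running?
        name=$(ps --no-headers -o comm= "$pid")
        [ "$name" != sudo ] || return 1            # never signal the sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                        # reap; tolerate nonzero exit
    }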
00:04:54.875 Shutdown signal received, stop current app iteration 00:04:54.875 06:47:54 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:54.875 06:47:54 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:54.875 00:04:54.875 real 0m15.819s 00:04:54.875 user 0m34.753s 00:04:54.875 sys 0m2.309s 00:04:54.875 06:47:54 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:54.875 06:47:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:54.875 ************************************ 00:04:54.875 END TEST app_repeat 00:04:54.875 ************************************ 00:04:54.875 06:47:54 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:54.875 06:47:54 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:54.875 06:47:54 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:54.875 06:47:54 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:54.875 06:47:54 event -- common/autotest_common.sh@10 -- # set +x 00:04:54.875 ************************************ 00:04:54.875 START TEST cpu_locks 00:04:54.875 ************************************ 00:04:54.875 06:47:54 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:54.875 * Looking for test storage... 00:04:54.875 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:54.875 06:47:54 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:54.875 06:47:54 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:04:54.875 06:47:54 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:54.875 06:47:54 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:54.875 06:47:54 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:54.875 06:47:54 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:54.875 06:47:54 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:54.875 06:47:54 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:54.875 06:47:54 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:54.875 06:47:54 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:54.875 06:47:54 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:54.875 06:47:54 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:54.875 06:47:54 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:54.875 06:47:54 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:54.875 06:47:54 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:54.875 06:47:54 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:54.875 06:47:54 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:54.875 06:47:54 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:54.875 06:47:54 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:54.875 06:47:54 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:54.875 06:47:54 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:54.875 06:47:54 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:54.875 06:47:54 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:54.875 06:47:54 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:54.875 06:47:54 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:54.875 06:47:54 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:54.875 06:47:54 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:54.875 06:47:54 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:54.875 06:47:54 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:54.875 06:47:54 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:54.875 06:47:54 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:54.875 06:47:54 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:54.876 06:47:54 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:54.876 06:47:54 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:54.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.876 --rc genhtml_branch_coverage=1 00:04:54.876 --rc genhtml_function_coverage=1 00:04:54.876 --rc genhtml_legend=1 00:04:54.876 --rc geninfo_all_blocks=1 00:04:54.876 --rc geninfo_unexecuted_blocks=1 00:04:54.876 00:04:54.876 ' 00:04:54.876 06:47:54 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:54.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.876 --rc genhtml_branch_coverage=1 00:04:54.876 --rc genhtml_function_coverage=1 00:04:54.876 --rc genhtml_legend=1 00:04:54.876 --rc geninfo_all_blocks=1 00:04:54.876 --rc geninfo_unexecuted_blocks=1 00:04:54.876 00:04:54.876 ' 00:04:54.876 06:47:54 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:54.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.876 --rc genhtml_branch_coverage=1 00:04:54.876 --rc genhtml_function_coverage=1 00:04:54.876 --rc genhtml_legend=1 00:04:54.876 --rc geninfo_all_blocks=1 00:04:54.876 --rc geninfo_unexecuted_blocks=1 00:04:54.876 00:04:54.876 ' 00:04:54.876 06:47:54 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:54.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.876 --rc genhtml_branch_coverage=1 00:04:54.876 --rc genhtml_function_coverage=1 00:04:54.876 --rc genhtml_legend=1 00:04:54.876 --rc geninfo_all_blocks=1 00:04:54.876 --rc geninfo_unexecuted_blocks=1 00:04:54.876 00:04:54.876 ' 00:04:54.876 06:47:54 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:54.876 06:47:54 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:54.876 06:47:54 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:54.876 06:47:54 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:54.876 06:47:54 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:54.876 06:47:54 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:54.876 06:47:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:54.876 ************************************ 
00:04:54.876 START TEST default_locks 00:04:54.876 ************************************ 00:04:54.876 06:47:54 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:04:54.876 06:47:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2898664 00:04:54.876 06:47:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2898664 00:04:54.876 06:47:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:54.876 06:47:54 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 2898664 ']' 00:04:54.876 06:47:54 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.876 06:47:54 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:54.876 06:47:54 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.876 06:47:54 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:54.876 06:47:54 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:55.137 [2024-10-16 06:47:54.416612] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:04:55.137 [2024-10-16 06:47:54.416675] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2898664 ] 00:04:55.137 [2024-10-16 06:47:54.499766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.137 [2024-10-16 06:47:54.541442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.078 06:47:55 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:56.078 06:47:55 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:04:56.078 06:47:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2898664 00:04:56.078 06:47:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2898664 00:04:56.078 06:47:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:56.339 lslocks: write error 00:04:56.339 06:47:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2898664 00:04:56.339 06:47:55 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 2898664 ']' 00:04:56.339 06:47:55 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 2898664 00:04:56.339 06:47:55 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:04:56.600 06:47:55 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:56.600 06:47:55 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2898664 00:04:56.600 06:47:55 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:56.600 06:47:55 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:56.600 06:47:55 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with 
pid 2898664' 00:04:56.600 killing process with pid 2898664 00:04:56.600 06:47:55 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 2898664 00:04:56.600 06:47:55 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 2898664 00:04:56.600 06:47:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2898664 00:04:56.600 06:47:56 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:04:56.600 06:47:56 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2898664 00:04:56.600 06:47:56 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:56.600 06:47:56 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:56.600 06:47:56 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:56.600 06:47:56 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:56.600 06:47:56 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 2898664 00:04:56.600 06:47:56 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 2898664 ']' 00:04:56.600 06:47:56 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.600 06:47:56 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:56.600 06:47:56 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
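The default_locks teardown above reduces to two reusable idioms: proving a pid holds its per-core lock files, and killing it safely. The "lslocks: write error" in the trace is only the broken pipe caused by grep -q closing the stream early. A simplified rendering (pid from the log; assumes the process is a child of the current shell, as it is in the harness):

pid=2898664

# locks_exist: does the pid hold any spdk_cpu_lock_* file locks?
lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "pid $pid holds its core locks"

# killprocess: never signal a sudo wrapper, then SIGTERM and reap.
name=$(ps --no-headers -o comm= "$pid")
if [ "$name" != sudo ]; then
    kill "$pid"
    wait "$pid" || true   # reap; wait only applies to children of this shell
fi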
00:04:56.600 06:47:56 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:56.600 06:47:56 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:56.600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2898664) - No such process 00:04:56.600 ERROR: process (pid: 2898664) is no longer running 00:04:56.600 06:47:56 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:56.600 06:47:56 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:04:56.600 06:47:56 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:04:56.600 06:47:56 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:56.600 06:47:56 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:56.600 06:47:56 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:56.600 06:47:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:56.600 06:47:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:56.600 06:47:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:56.600 06:47:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:56.600 00:04:56.600 real 0m1.737s 00:04:56.600 user 0m1.888s 00:04:56.600 sys 0m0.580s 00:04:56.600 06:47:56 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:56.600 06:47:56 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:56.600 ************************************ 00:04:56.600 END TEST default_locks 00:04:56.600 ************************************ 00:04:56.861 06:47:56 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:56.861 06:47:56 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:56.861 06:47:56 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:56.861 06:47:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:56.861 ************************************ 00:04:56.861 START TEST default_locks_via_rpc 00:04:56.861 ************************************ 00:04:56.861 06:47:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:04:56.861 06:47:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2899093 00:04:56.861 06:47:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2899093 00:04:56.861 06:47:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:56.861 06:47:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2899093 ']' 00:04:56.861 06:47:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.861 06:47:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:56.861 06:47:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:56.861 06:47:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:56.861 06:47:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.861 [2024-10-16 06:47:56.229437] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:04:56.861 [2024-10-16 06:47:56.229494] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2899093 ] 00:04:56.861 [2024-10-16 06:47:56.307056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.861 [2024-10-16 06:47:56.343361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.802 06:47:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:57.802 06:47:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:57.802 06:47:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:57.802 06:47:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:57.802 06:47:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.802 06:47:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:57.802 06:47:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:57.802 06:47:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:57.802 06:47:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:57.802 06:47:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:57.802 06:47:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:57.802 06:47:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:57.802 06:47:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.802 06:47:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:57.802 06:47:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2899093 00:04:57.802 06:47:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2899093 00:04:57.802 06:47:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:58.062 06:47:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2899093 00:04:58.062 06:47:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 2899093 ']' 00:04:58.062 06:47:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 2899093 00:04:58.062 06:47:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:04:58.062 06:47:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:58.062 06:47:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2899093 00:04:58.322 06:47:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:58.322 
06:47:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:58.322 06:47:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2899093' 00:04:58.322 killing process with pid 2899093 00:04:58.322 06:47:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 2899093 00:04:58.322 06:47:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 2899093 00:04:58.322 00:04:58.322 real 0m1.623s 00:04:58.322 user 0m1.730s 00:04:58.322 sys 0m0.565s 00:04:58.322 06:47:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:58.322 06:47:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.322 ************************************ 00:04:58.322 END TEST default_locks_via_rpc 00:04:58.322 ************************************ 00:04:58.582 06:47:57 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:58.582 06:47:57 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:58.582 06:47:57 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:58.582 06:47:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:58.582 ************************************ 00:04:58.582 START TEST non_locking_app_on_locked_coremask 00:04:58.582 ************************************ 00:04:58.582 06:47:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:04:58.582 06:47:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2899455 00:04:58.582 06:47:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2899455 /var/tmp/spdk.sock 00:04:58.582 06:47:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:58.582 06:47:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2899455 ']' 00:04:58.582 06:47:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.582 06:47:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:58.582 06:47:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.582 06:47:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:58.583 06:47:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:58.583 [2024-10-16 06:47:57.925686] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
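The default_locks_via_rpc sequence just completed drives the same lock lifecycle through RPCs instead of process start/stop: framework_disable_cpumask_locks releases the core lock files and framework_enable_cpumask_locks re-acquires them. The equivalent shell sequence, sketched with the paths shown in the log:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/spdk.sock
pid=2899093   # spdk_tgt pid from the log

"$rpc" -s "$sock" framework_disable_cpumask_locks   # drop /var/tmp/spdk_cpu_lock_*
lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "unexpected: still locked"

"$rpc" -s "$sock" framework_enable_cpumask_locks    # take the locks back
lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "locks re-acquired"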
00:04:58.583 [2024-10-16 06:47:57.925745] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2899455 ] 00:04:58.583 [2024-10-16 06:47:58.003154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.583 [2024-10-16 06:47:58.036110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.522 06:47:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:59.522 06:47:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:04:59.522 06:47:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2899684 00:04:59.522 06:47:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2899684 /var/tmp/spdk2.sock 00:04:59.522 06:47:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:59.522 06:47:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2899684 ']' 00:04:59.522 06:47:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:59.522 06:47:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:59.522 06:47:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:59.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:59.522 06:47:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:59.522 06:47:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:59.522 [2024-10-16 06:47:58.768353] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:04:59.522 [2024-10-16 06:47:58.768407] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2899684 ] 00:04:59.522 [2024-10-16 06:47:58.838374] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
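non_locking_app_on_locked_coremask, now starting, shows the supported way to co-schedule two targets on one core: the second instance opts out of lock checking and listens on its own RPC socket. Reduced to its essentials (binary path and flags from the trace; the real test waits on each RPC socket before proceeding):

bin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt

"$bin" -m 0x1 &                                                  # first instance claims core 0
"$bin" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # shares core 0, no lock claim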
00:04:59.522 [2024-10-16 06:47:58.838395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.522 [2024-10-16 06:47:58.901143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.092 06:47:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:00.093 06:47:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:00.093 06:47:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2899455 00:05:00.093 06:47:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2899455 00:05:00.093 06:47:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:00.732 lslocks: write error 00:05:00.732 06:48:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2899455 00:05:00.732 06:48:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2899455 ']' 00:05:00.732 06:48:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2899455 00:05:00.732 06:48:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:00.732 06:48:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:00.732 06:48:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2899455 00:05:00.732 06:48:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:00.732 06:48:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:00.732 06:48:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2899455' 00:05:00.732 killing process with pid 2899455 00:05:00.732 06:48:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2899455 00:05:00.732 06:48:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2899455 00:05:00.999 06:48:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2899684 00:05:00.999 06:48:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2899684 ']' 00:05:00.999 06:48:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2899684 00:05:01.261 06:48:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:01.261 06:48:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:01.261 06:48:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2899684 00:05:01.261 06:48:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:01.261 06:48:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:01.261 06:48:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2899684' 00:05:01.261 
killing process with pid 2899684 00:05:01.261 06:48:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2899684 00:05:01.261 06:48:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2899684 00:05:01.261 00:05:01.261 real 0m2.880s 00:05:01.261 user 0m3.187s 00:05:01.261 sys 0m0.893s 00:05:01.261 06:48:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:01.261 06:48:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:01.261 ************************************ 00:05:01.261 END TEST non_locking_app_on_locked_coremask 00:05:01.261 ************************************ 00:05:01.522 06:48:00 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:01.522 06:48:00 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:01.522 06:48:00 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:01.522 06:48:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:01.522 ************************************ 00:05:01.522 START TEST locking_app_on_unlocked_coremask 00:05:01.522 ************************************ 00:05:01.522 06:48:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:05:01.522 06:48:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2900106 00:05:01.522 06:48:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2900106 /var/tmp/spdk.sock 00:05:01.522 06:48:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:01.522 06:48:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2900106 ']' 00:05:01.522 06:48:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.522 06:48:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:01.522 06:48:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.522 06:48:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:01.522 06:48:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:01.522 [2024-10-16 06:48:00.885445] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:05:01.522 [2024-10-16 06:48:00.885502] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2900106 ] 00:05:01.522 [2024-10-16 06:48:00.962295] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
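The locks these tests probe are advisory locks on /var/tmp/spdk_cpu_lock_<core> files, which is why lslocks can list them and why a --disable-cpumask-locks instance coexists with a locking one. An SPDK-independent illustration with flock(1); the file-name format comes from the traces, and flock here is an analogy for whatever locking call app.c actually makes:

lockfile=/var/tmp/spdk_cpu_lock_000     # core 0's lock file, per the traces

flock "$lockfile" sleep 10 &            # hold the lock, as a running reactor would

if ! flock -n "$lockfile" true; then    # a non-blocking second claim fails,
    echo "core 0 already claimed"       # mirroring "Cannot create lock on core 0"
fi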
00:05:01.522 [2024-10-16 06:48:00.962321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.522 [2024-10-16 06:48:00.995728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.464 06:48:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:02.464 06:48:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:02.464 06:48:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2900469 00:05:02.464 06:48:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2900469 /var/tmp/spdk2.sock 00:05:02.464 06:48:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2900469 ']' 00:05:02.464 06:48:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:02.465 06:48:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:02.465 06:48:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:02.465 06:48:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:02.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:02.465 06:48:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:02.465 06:48:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:02.465 [2024-10-16 06:48:01.726785] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
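Every waitforlisten above is the harness polling the new instance's RPC socket until it answers or max_retries runs out. A simplified stand-in; the rpc_get_methods probe and the retry ceiling are assumptions about autotest_common.sh, not verbatim:

sock=/var/tmp/spdk2.sock
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
max_retries=100

i=0
until "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1; do
    i=$((i + 1))
    [ "$i" -ge "$max_retries" ] && { echo "timed out waiting on $sock" >&2; exit 1; }
    sleep 0.1
done
echo "target is listening on $sock"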
00:05:02.465 [2024-10-16 06:48:01.726848] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2900469 ] 00:05:02.465 [2024-10-16 06:48:01.797562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.465 [2024-10-16 06:48:01.859863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.036 06:48:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:03.036 06:48:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:03.036 06:48:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2900469 00:05:03.036 06:48:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2900469 00:05:03.036 06:48:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:03.608 lslocks: write error 00:05:03.608 06:48:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2900106 00:05:03.608 06:48:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2900106 ']' 00:05:03.608 06:48:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 2900106 00:05:03.608 06:48:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:03.608 06:48:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:03.608 06:48:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2900106 00:05:03.608 06:48:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:03.608 06:48:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:03.608 06:48:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2900106' 00:05:03.608 killing process with pid 2900106 00:05:03.608 06:48:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 2900106 00:05:03.608 06:48:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 2900106 00:05:04.180 06:48:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2900469 00:05:04.180 06:48:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2900469 ']' 00:05:04.180 06:48:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 2900469 00:05:04.180 06:48:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:04.180 06:48:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:04.180 06:48:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2900469 00:05:04.180 06:48:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:04.180 06:48:03 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:04.180 06:48:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2900469' 00:05:04.180 killing process with pid 2900469 00:05:04.180 06:48:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 2900469 00:05:04.180 06:48:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 2900469 00:05:04.440 00:05:04.441 real 0m2.864s 00:05:04.441 user 0m3.182s 00:05:04.441 sys 0m0.866s 00:05:04.441 06:48:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:04.441 06:48:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:04.441 ************************************ 00:05:04.441 END TEST locking_app_on_unlocked_coremask 00:05:04.441 ************************************ 00:05:04.441 06:48:03 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:04.441 06:48:03 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:04.441 06:48:03 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:04.441 06:48:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:04.441 ************************************ 00:05:04.441 START TEST locking_app_on_locked_coremask 00:05:04.441 ************************************ 00:05:04.441 06:48:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:05:04.441 06:48:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2900885 00:05:04.441 06:48:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2900885 /var/tmp/spdk.sock 00:05:04.441 06:48:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:04.441 06:48:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2900885 ']' 00:05:04.441 06:48:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.441 06:48:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:04.441 06:48:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:04.441 06:48:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:04.441 06:48:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:04.441 [2024-10-16 06:48:03.827372] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
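The NOT wrapper appearing in the records that follow inverts a command's exit status so the harness can assert an expected failure, here that a second target must not come up on an already-claimed core. The idiom in miniature (a sketch, not the autotest_common.sh implementation, which also tracks es values):

NOT() {
    # Succeed only when the wrapped command fails.
    if "$@"; then
        return 1
    fi
    return 0
}

# e.g. NOT waitforlisten "$pid2" /var/tmp/spdk2.sock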
00:05:04.441 [2024-10-16 06:48:03.827428] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2900885 ] 00:05:04.441 [2024-10-16 06:48:03.904953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.441 [2024-10-16 06:48:03.939158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.383 06:48:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:05.383 06:48:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:05.383 06:48:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2900968 00:05:05.383 06:48:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2900968 /var/tmp/spdk2.sock 00:05:05.383 06:48:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:05.383 06:48:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:05.383 06:48:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2900968 /var/tmp/spdk2.sock 00:05:05.383 06:48:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:05.383 06:48:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:05.383 06:48:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:05.383 06:48:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:05.384 06:48:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2900968 /var/tmp/spdk2.sock 00:05:05.384 06:48:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2900968 ']' 00:05:05.384 06:48:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:05.384 06:48:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:05.384 06:48:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:05.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:05.384 06:48:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:05.384 06:48:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:05.384 [2024-10-16 06:48:04.666288] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
00:05:05.384 [2024-10-16 06:48:04.666343] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2900968 ] 00:05:05.384 [2024-10-16 06:48:04.737441] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2900885 has claimed it. 00:05:05.384 [2024-10-16 06:48:04.737472] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:05.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2900968) - No such process 00:05:05.955 ERROR: process (pid: 2900968) is no longer running 00:05:05.955 06:48:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:05.955 06:48:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:05.955 06:48:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:05.955 06:48:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:05.955 06:48:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:05.955 06:48:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:05.955 06:48:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2900885 00:05:05.955 06:48:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2900885 00:05:05.955 06:48:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:06.526 lslocks: write error 00:05:06.526 06:48:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2900885 00:05:06.526 06:48:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2900885 ']' 00:05:06.526 06:48:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2900885 00:05:06.526 06:48:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:06.526 06:48:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:06.526 06:48:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2900885 00:05:06.526 06:48:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:06.526 06:48:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:06.526 06:48:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2900885' 00:05:06.526 killing process with pid 2900885 00:05:06.526 06:48:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2900885 00:05:06.526 06:48:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2900885 00:05:06.787 00:05:06.787 real 0m2.329s 00:05:06.787 user 0m2.601s 00:05:06.787 sys 0m0.660s 00:05:06.787 06:48:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:05:06.787 06:48:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:06.787 ************************************ 00:05:06.787 END TEST locking_app_on_locked_coremask 00:05:06.787 ************************************ 00:05:06.787 06:48:06 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:06.787 06:48:06 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:06.787 06:48:06 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:06.787 06:48:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:06.787 ************************************ 00:05:06.787 START TEST locking_overlapped_coremask 00:05:06.787 ************************************ 00:05:06.787 06:48:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:05:06.787 06:48:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2901280 00:05:06.787 06:48:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2901280 /var/tmp/spdk.sock 00:05:06.787 06:48:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:06.787 06:48:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 2901280 ']' 00:05:06.787 06:48:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.787 06:48:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:06.787 06:48:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:06.787 06:48:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:06.787 06:48:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:06.787 [2024-10-16 06:48:06.237130] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
00:05:06.787 [2024-10-16 06:48:06.237189] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2901280 ] 00:05:07.048 [2024-10-16 06:48:06.320085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:07.048 [2024-10-16 06:48:06.356908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:07.048 [2024-10-16 06:48:06.357307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:07.048 [2024-10-16 06:48:06.357309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.620 06:48:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:07.620 06:48:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:07.620 06:48:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2901595 00:05:07.620 06:48:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2901595 /var/tmp/spdk2.sock 00:05:07.620 06:48:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:07.620 06:48:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:07.620 06:48:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2901595 /var/tmp/spdk2.sock 00:05:07.620 06:48:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:07.620 06:48:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:07.620 06:48:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:07.620 06:48:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:07.620 06:48:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2901595 /var/tmp/spdk2.sock 00:05:07.620 06:48:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 2901595 ']' 00:05:07.620 06:48:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:07.620 06:48:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:07.620 06:48:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:07.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:07.620 06:48:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:07.620 06:48:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:07.620 [2024-10-16 06:48:07.095522] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
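locking_overlapped_coremask runs the first target on -m 0x7 (cores 0-2) and tries the second on -m 0x1c (cores 2-4); the masks intersect only at core 2, exactly the core named in the claim error that follows. The overlap is plain shell arithmetic:

mask1=0x7     # cores 0,1,2
mask2=0x1c    # cores 2,3,4
overlap=$(( mask1 & mask2 ))
printf 'overlap mask: 0x%x\n' "$overlap"     # -> 0x4, i.e. core 2

for core in $(seq 0 63); do                  # name the conflicting cores
    (( overlap >> core & 1 )) && echo "conflict on core $core"
done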
00:05:07.620 [2024-10-16 06:48:07.095576] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2901595 ] 00:05:07.886 [2024-10-16 06:48:07.183672] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2901280 has claimed it. 00:05:07.886 [2024-10-16 06:48:07.183711] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:08.458 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2901595) - No such process 00:05:08.458 ERROR: process (pid: 2901595) is no longer running 00:05:08.458 06:48:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:08.458 06:48:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:08.458 06:48:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:08.458 06:48:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:08.458 06:48:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:08.458 06:48:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:08.458 06:48:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:08.458 06:48:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:08.458 06:48:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:08.458 06:48:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:08.458 06:48:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2901280 00:05:08.458 06:48:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 2901280 ']' 00:05:08.458 06:48:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 2901280 00:05:08.458 06:48:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:05:08.458 06:48:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:08.458 06:48:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2901280 00:05:08.458 06:48:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:08.458 06:48:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:08.459 06:48:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2901280' 00:05:08.459 killing process with pid 2901280 00:05:08.459 06:48:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 2901280 00:05:08.459 06:48:07 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 2901280 00:05:08.719 00:05:08.719 real 0m1.787s 00:05:08.719 user 0m5.173s 00:05:08.719 sys 0m0.382s 00:05:08.719 06:48:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:08.719 06:48:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:08.719 ************************************ 00:05:08.719 END TEST locking_overlapped_coremask 00:05:08.719 ************************************ 00:05:08.719 06:48:08 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:08.719 06:48:08 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:08.719 06:48:08 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:08.719 06:48:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:08.719 ************************************ 00:05:08.719 START TEST locking_overlapped_coremask_via_rpc 00:05:08.719 ************************************ 00:05:08.719 06:48:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:05:08.719 06:48:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2901718 00:05:08.719 06:48:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2901718 /var/tmp/spdk.sock 00:05:08.720 06:48:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:08.720 06:48:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2901718 ']' 00:05:08.720 06:48:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.720 06:48:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:08.720 06:48:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.720 06:48:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:08.720 06:48:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.720 [2024-10-16 06:48:08.103909] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:05:08.720 [2024-10-16 06:48:08.103969] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2901718 ] 00:05:08.720 [2024-10-16 06:48:08.182745] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
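check_remaining_locks, seen in the coremask test above, verifies that after the second target exits only the surviving target's per-core lock files remain. Its core is a simple glob-against-brace-expansion comparison, reproduced here as a sketch with the exact paths from the trace:

    locks=(/var/tmp/spdk_cpu_lock_*)                     # whatever lock files exist now
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # cores 0-2 of the surviving target
    [[ ${locks[*]} == "${locks_expected[*]}" ]] && echo "only cores 0-2 hold locks"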
00:05:08.720 [2024-10-16 06:48:08.182777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:08.980 [2024-10-16 06:48:08.222454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:08.980 [2024-10-16 06:48:08.222607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.981 [2024-10-16 06:48:08.222609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:09.551 06:48:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:09.551 06:48:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:09.551 06:48:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2901974 00:05:09.551 06:48:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2901974 /var/tmp/spdk2.sock 00:05:09.551 06:48:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2901974 ']' 00:05:09.551 06:48:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:09.551 06:48:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:09.551 06:48:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:09.551 06:48:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:09.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:09.551 06:48:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:09.551 06:48:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.551 [2024-10-16 06:48:08.963337] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:05:09.551 [2024-10-16 06:48:08.963393] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2901974 ] 00:05:09.551 [2024-10-16 06:48:09.038578] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
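Both "CPU core locks deactivated" notices above come from the --disable-cpumask-locks flag: in this via_rpc variant, both targets are allowed to start on overlapping masks, and the contest over core 2 is deferred until the locks are switched on over JSON-RPC. The two launch commands, condensed from the trace (run from the spdk checkout):

    build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &                           # serves /var/tmp/spdk.sock
    build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &   # serves /var/tmp/spdk2.sock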
00:05:09.551 [2024-10-16 06:48:09.038601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:09.812 [2024-10-16 06:48:09.097754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:09.812 [2024-10-16 06:48:09.100993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:09.812 [2024-10-16 06:48:09.100995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:10.383 06:48:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:10.383 06:48:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:10.383 06:48:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:10.383 06:48:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.383 06:48:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.383 06:48:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.383 06:48:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:10.383 06:48:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:10.383 06:48:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:10.383 06:48:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:10.383 06:48:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:10.383 06:48:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:10.383 06:48:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:10.383 06:48:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:10.383 06:48:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.383 06:48:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.383 [2024-10-16 06:48:09.761925] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2901718 has claimed it. 
00:05:10.383 request: 00:05:10.383 { 00:05:10.383 "method": "framework_enable_cpumask_locks", 00:05:10.383 "req_id": 1 00:05:10.383 } 00:05:10.383 Got JSON-RPC error response 00:05:10.383 response: 00:05:10.383 { 00:05:10.383 "code": -32603, 00:05:10.383 "message": "Failed to claim CPU core: 2" 00:05:10.383 } 00:05:10.383 06:48:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:10.383 06:48:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:10.383 06:48:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:10.383 06:48:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:10.383 06:48:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:10.383 06:48:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2901718 /var/tmp/spdk.sock 00:05:10.383 06:48:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2901718 ']' 00:05:10.383 06:48:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.383 06:48:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:10.383 06:48:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.383 06:48:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:10.383 06:48:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.644 06:48:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:10.644 06:48:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:10.644 06:48:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2901974 /var/tmp/spdk2.sock 00:05:10.644 06:48:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2901974 ']' 00:05:10.644 06:48:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:10.644 06:48:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:10.644 06:48:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:10.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
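The -32603 response above is the expected outcome of enabling the locks in order: the spdk.sock target claims cores 0-2 first, so the same RPC against spdk2.sock fails on the shared core 2. Replayed as direct rpc.py calls (socket paths as in the trace, run from the spdk checkout):

    scripts/rpc.py framework_enable_cpumask_locks                         # spdk.sock target claims cores 0-2
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # the second call returns JSON-RPC error -32603: "Failed to claim CPU core: 2"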
00:05:10.644 06:48:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:10.644 06:48:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.644 06:48:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:10.644 06:48:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:10.644 06:48:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:10.644 06:48:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:10.644 06:48:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:10.644 06:48:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:10.644 00:05:10.644 real 0m2.102s 00:05:10.644 user 0m0.876s 00:05:10.644 sys 0m0.152s 00:05:10.644 06:48:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:10.644 06:48:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.644 ************************************ 00:05:10.644 END TEST locking_overlapped_coremask_via_rpc 00:05:10.644 ************************************ 00:05:10.905 06:48:10 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:10.905 06:48:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2901718 ]] 00:05:10.905 06:48:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2901718 00:05:10.905 06:48:10 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2901718 ']' 00:05:10.905 06:48:10 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2901718 00:05:10.905 06:48:10 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:10.905 06:48:10 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:10.905 06:48:10 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2901718 00:05:10.905 06:48:10 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:10.905 06:48:10 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:10.905 06:48:10 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2901718' 00:05:10.905 killing process with pid 2901718 00:05:10.905 06:48:10 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 2901718 00:05:10.905 06:48:10 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 2901718 00:05:11.166 06:48:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2901974 ]] 00:05:11.166 06:48:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2901974 00:05:11.166 06:48:10 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2901974 ']' 00:05:11.166 06:48:10 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2901974 00:05:11.166 06:48:10 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:11.166 06:48:10 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:05:11.166 06:48:10 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2901974 00:05:11.166 06:48:10 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:11.166 06:48:10 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:11.166 06:48:10 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2901974' 00:05:11.166 killing process with pid 2901974 00:05:11.166 06:48:10 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 2901974 00:05:11.166 06:48:10 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 2901974 00:05:11.428 06:48:10 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:11.428 06:48:10 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:11.428 06:48:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2901718 ]] 00:05:11.428 06:48:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2901718 00:05:11.428 06:48:10 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2901718 ']' 00:05:11.428 06:48:10 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2901718 00:05:11.428 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2901718) - No such process 00:05:11.428 06:48:10 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 2901718 is not found' 00:05:11.428 Process with pid 2901718 is not found 00:05:11.428 06:48:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2901974 ]] 00:05:11.428 06:48:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2901974 00:05:11.428 06:48:10 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2901974 ']' 00:05:11.428 06:48:10 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2901974 00:05:11.428 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2901974) - No such process 00:05:11.428 06:48:10 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 2901974 is not found' 00:05:11.428 Process with pid 2901974 is not found 00:05:11.428 06:48:10 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:11.428 00:05:11.428 real 0m16.583s 00:05:11.428 user 0m28.816s 00:05:11.428 sys 0m5.033s 00:05:11.428 06:48:10 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:11.428 06:48:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.428 ************************************ 00:05:11.428 END TEST cpu_locks 00:05:11.428 ************************************ 00:05:11.428 00:05:11.428 real 0m42.492s 00:05:11.428 user 1m23.944s 00:05:11.428 sys 0m8.450s 00:05:11.429 06:48:10 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:11.429 06:48:10 event -- common/autotest_common.sh@10 -- # set +x 00:05:11.429 ************************************ 00:05:11.429 END TEST event 00:05:11.429 ************************************ 00:05:11.429 06:48:10 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:11.429 06:48:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:11.429 06:48:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:11.429 06:48:10 -- common/autotest_common.sh@10 -- # set +x 00:05:11.429 ************************************ 00:05:11.429 START TEST thread 00:05:11.429 ************************************ 00:05:11.429 06:48:10 thread -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:11.429 * Looking for test storage... 00:05:11.429 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:11.429 06:48:10 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:11.429 06:48:10 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:05:11.429 06:48:10 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:11.689 06:48:10 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:11.689 06:48:10 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:11.690 06:48:10 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:11.690 06:48:10 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:11.690 06:48:10 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:11.690 06:48:10 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:11.690 06:48:10 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:11.690 06:48:10 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:11.690 06:48:10 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:11.690 06:48:10 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:11.690 06:48:10 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:11.690 06:48:10 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:11.690 06:48:10 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:11.690 06:48:11 thread -- scripts/common.sh@345 -- # : 1 00:05:11.690 06:48:11 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:11.690 06:48:11 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:11.690 06:48:11 thread -- scripts/common.sh@365 -- # decimal 1 00:05:11.690 06:48:11 thread -- scripts/common.sh@353 -- # local d=1 00:05:11.690 06:48:11 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:11.690 06:48:11 thread -- scripts/common.sh@355 -- # echo 1 00:05:11.690 06:48:11 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:11.690 06:48:11 thread -- scripts/common.sh@366 -- # decimal 2 00:05:11.690 06:48:11 thread -- scripts/common.sh@353 -- # local d=2 00:05:11.690 06:48:11 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:11.690 06:48:11 thread -- scripts/common.sh@355 -- # echo 2 00:05:11.690 06:48:11 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:11.690 06:48:11 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:11.690 06:48:11 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:11.690 06:48:11 thread -- scripts/common.sh@368 -- # return 0 00:05:11.690 06:48:11 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:11.690 06:48:11 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:11.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.690 --rc genhtml_branch_coverage=1 00:05:11.690 --rc genhtml_function_coverage=1 00:05:11.690 --rc genhtml_legend=1 00:05:11.690 --rc geninfo_all_blocks=1 00:05:11.690 --rc geninfo_unexecuted_blocks=1 00:05:11.690 00:05:11.690 ' 00:05:11.690 06:48:11 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:11.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.690 --rc genhtml_branch_coverage=1 00:05:11.690 --rc genhtml_function_coverage=1 00:05:11.690 --rc genhtml_legend=1 00:05:11.690 --rc geninfo_all_blocks=1 00:05:11.690 --rc geninfo_unexecuted_blocks=1 00:05:11.690 
00:05:11.690 ' 00:05:11.690 06:48:11 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:11.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.690 --rc genhtml_branch_coverage=1 00:05:11.690 --rc genhtml_function_coverage=1 00:05:11.690 --rc genhtml_legend=1 00:05:11.690 --rc geninfo_all_blocks=1 00:05:11.690 --rc geninfo_unexecuted_blocks=1 00:05:11.690 00:05:11.690 ' 00:05:11.690 06:48:11 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:11.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.690 --rc genhtml_branch_coverage=1 00:05:11.690 --rc genhtml_function_coverage=1 00:05:11.690 --rc genhtml_legend=1 00:05:11.690 --rc geninfo_all_blocks=1 00:05:11.690 --rc geninfo_unexecuted_blocks=1 00:05:11.690 00:05:11.690 ' 00:05:11.690 06:48:11 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:11.690 06:48:11 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:11.690 06:48:11 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:11.690 06:48:11 thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.690 ************************************ 00:05:11.690 START TEST thread_poller_perf 00:05:11.690 ************************************ 00:05:11.690 06:48:11 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:11.690 [2024-10-16 06:48:11.078634] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:05:11.690 [2024-10-16 06:48:11.078721] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2902807 ] 00:05:11.690 [2024-10-16 06:48:11.159367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.951 [2024-10-16 06:48:11.190970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.951 Running 1000 pollers for 1 seconds with 1 microseconds period. 
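The cmp_versions walk above is how scripts/common.sh picks lcov options: "lt 1.15 2" splits both strings into fields and compares them numerically, so lcov 1.15 sorts below 2 and the pre-2.0 flag set is exported. A simplified sketch of that comparison (the real helper also splits on '-' and ':' and has a gt/ge/le family):

    lt() {  # succeed when version $1 sorts before version $2
      local -a v1 v2; local i
      IFS=. read -ra v1 <<< "$1"
      IFS=. read -ra v2 <<< "$2"
      for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1   # equal versions are not "less than"
    }
    lt 1.15 2 && echo "lcov < 2: keep --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"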
00:05:12.895 [2024-10-16T04:48:12.394Z] ====================================== 00:05:12.895 [2024-10-16T04:48:12.394Z] busy:2407703750 (cyc) 00:05:12.895 [2024-10-16T04:48:12.394Z] total_run_count: 418000 00:05:12.895 [2024-10-16T04:48:12.394Z] tsc_hz: 2400000000 (cyc) 00:05:12.895 [2024-10-16T04:48:12.394Z] ====================================== 00:05:12.895 [2024-10-16T04:48:12.394Z] poller_cost: 5760 (cyc), 2400 (nsec) 00:05:12.895 00:05:12.895 real 0m1.169s 00:05:12.895 user 0m1.083s 00:05:12.895 sys 0m0.081s 00:05:12.895 06:48:12 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:12.895 06:48:12 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:12.895 ************************************ 00:05:12.895 END TEST thread_poller_perf 00:05:12.895 ************************************ 00:05:12.895 06:48:12 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:12.895 06:48:12 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:12.895 06:48:12 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:12.895 06:48:12 thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.895 ************************************ 00:05:12.895 START TEST thread_poller_perf 00:05:12.895 ************************************ 00:05:12.895 06:48:12 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:12.895 [2024-10-16 06:48:12.322888] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:05:12.895 [2024-10-16 06:48:12.322992] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2903229 ] 00:05:13.156 [2024-10-16 06:48:12.401997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.156 [2024-10-16 06:48:12.432583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.156 Running 1000 pollers for 1 seconds with 0 microseconds period. 
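The figures in the first table above are internally consistent: poller_cost matches busy cycles divided by total_run_count, converted to nanoseconds via the 2400000000 Hz tsc. Checking the 1-microsecond-period run with shell arithmetic (numbers copied from the table):

    echo $(( 2407703750 / 418000 ))              # 5760 cycles per poller iteration
    echo $(( 5760 * 1000000000 / 2400000000 ))   # 2400 nsec at a tsc_hz of 2400000000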
00:05:14.098 [2024-10-16T04:48:13.597Z] ====================================== 00:05:14.098 [2024-10-16T04:48:13.597Z] busy:2401356360 (cyc) 00:05:14.098 [2024-10-16T04:48:13.597Z] total_run_count: 5559000 00:05:14.098 [2024-10-16T04:48:13.597Z] tsc_hz: 2400000000 (cyc) 00:05:14.098 [2024-10-16T04:48:13.597Z] ====================================== 00:05:14.098 [2024-10-16T04:48:13.597Z] poller_cost: 431 (cyc), 179 (nsec) 00:05:14.098 00:05:14.098 real 0m1.158s 00:05:14.098 user 0m1.081s 00:05:14.098 sys 0m0.073s 00:05:14.098 06:48:13 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:14.098 06:48:13 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:14.098 ************************************ 00:05:14.098 END TEST thread_poller_perf 00:05:14.098 ************************************ 00:05:14.098 06:48:13 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:14.098 00:05:14.098 real 0m2.677s 00:05:14.098 user 0m2.347s 00:05:14.098 sys 0m0.342s 00:05:14.098 06:48:13 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:14.098 06:48:13 thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.098 ************************************ 00:05:14.098 END TEST thread 00:05:14.098 ************************************ 00:05:14.098 06:48:13 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:14.098 06:48:13 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:14.098 06:48:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:14.098 06:48:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:14.098 06:48:13 -- common/autotest_common.sh@10 -- # set +x 00:05:14.098 ************************************ 00:05:14.098 START TEST app_cmdline 00:05:14.098 ************************************ 00:05:14.098 06:48:13 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:14.359 * Looking for test storage... 
00:05:14.359 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:14.359 06:48:13 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:14.359 06:48:13 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:05:14.359 06:48:13 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:14.359 06:48:13 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:14.359 06:48:13 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:14.359 06:48:13 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:14.359 06:48:13 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:14.359 06:48:13 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.359 06:48:13 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:14.359 06:48:13 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:14.359 06:48:13 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:14.359 06:48:13 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:14.359 06:48:13 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:14.359 06:48:13 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:14.359 06:48:13 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:14.359 06:48:13 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:14.359 06:48:13 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:14.359 06:48:13 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:14.359 06:48:13 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:14.359 06:48:13 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:14.359 06:48:13 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:14.359 06:48:13 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.359 06:48:13 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:14.359 06:48:13 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:14.359 06:48:13 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:14.359 06:48:13 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:14.359 06:48:13 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.359 06:48:13 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:14.359 06:48:13 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:14.359 06:48:13 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:14.359 06:48:13 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:14.359 06:48:13 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:14.359 06:48:13 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.359 06:48:13 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:14.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.359 --rc genhtml_branch_coverage=1 00:05:14.359 --rc genhtml_function_coverage=1 00:05:14.359 --rc genhtml_legend=1 00:05:14.359 --rc geninfo_all_blocks=1 00:05:14.359 --rc geninfo_unexecuted_blocks=1 00:05:14.359 00:05:14.359 ' 00:05:14.359 06:48:13 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:14.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.359 --rc genhtml_branch_coverage=1 00:05:14.359 --rc genhtml_function_coverage=1 00:05:14.359 --rc genhtml_legend=1 00:05:14.359 --rc geninfo_all_blocks=1 00:05:14.359 --rc geninfo_unexecuted_blocks=1 
00:05:14.359 00:05:14.359 ' 00:05:14.359 06:48:13 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:14.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.359 --rc genhtml_branch_coverage=1 00:05:14.359 --rc genhtml_function_coverage=1 00:05:14.359 --rc genhtml_legend=1 00:05:14.359 --rc geninfo_all_blocks=1 00:05:14.359 --rc geninfo_unexecuted_blocks=1 00:05:14.359 00:05:14.359 ' 00:05:14.359 06:48:13 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:14.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.359 --rc genhtml_branch_coverage=1 00:05:14.359 --rc genhtml_function_coverage=1 00:05:14.359 --rc genhtml_legend=1 00:05:14.359 --rc geninfo_all_blocks=1 00:05:14.359 --rc geninfo_unexecuted_blocks=1 00:05:14.359 00:05:14.359 ' 00:05:14.359 06:48:13 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:14.359 06:48:13 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2903637 00:05:14.359 06:48:13 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2903637 00:05:14.359 06:48:13 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:14.359 06:48:13 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 2903637 ']' 00:05:14.359 06:48:13 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.359 06:48:13 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:14.359 06:48:13 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.359 06:48:13 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:14.359 06:48:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:14.359 [2024-10-16 06:48:13.841365] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
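The target just started above runs with --rpcs-allowed spdk_get_version,rpc_get_methods, which is what the rest of this test leans on: the two listed methods answer normally and anything else is rejected, as the -32601 response further below shows. Condensed to direct calls (run from the spdk checkout):

    build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    scripts/rpc.py spdk_get_version         # allowed: returns the version JSON shown below
    scripts/rpc.py rpc_get_methods          # allowed: the sorted list holds exactly these two methods
    scripts/rpc.py env_dpdk_get_mem_stats   # blocked: -32601 "Method not found"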
00:05:14.359 [2024-10-16 06:48:13.841433] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2903637 ] 00:05:14.619 [2024-10-16 06:48:13.915904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.619 [2024-10-16 06:48:13.945990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.190 06:48:14 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:15.190 06:48:14 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:05:15.190 06:48:14 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:15.450 { 00:05:15.451 "version": "SPDK v25.01-pre git sha1 70fd76b04", 00:05:15.451 "fields": { 00:05:15.451 "major": 25, 00:05:15.451 "minor": 1, 00:05:15.451 "patch": 0, 00:05:15.451 "suffix": "-pre", 00:05:15.451 "commit": "70fd76b04" 00:05:15.451 } 00:05:15.451 } 00:05:15.451 06:48:14 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:15.451 06:48:14 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:15.451 06:48:14 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:15.451 06:48:14 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:15.451 06:48:14 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:15.451 06:48:14 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.451 06:48:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:15.451 06:48:14 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:15.451 06:48:14 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:15.451 06:48:14 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.451 06:48:14 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:15.451 06:48:14 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:15.451 06:48:14 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:15.451 06:48:14 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:05:15.451 06:48:14 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:15.451 06:48:14 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:15.451 06:48:14 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:15.451 06:48:14 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:15.451 06:48:14 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:15.451 06:48:14 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:15.451 06:48:14 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:15.451 06:48:14 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:15.451 06:48:14 app_cmdline -- common/autotest_common.sh@644 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:15.451 06:48:14 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:15.712 request: 00:05:15.712 { 00:05:15.712 "method": "env_dpdk_get_mem_stats", 00:05:15.712 "req_id": 1 00:05:15.712 } 00:05:15.712 Got JSON-RPC error response 00:05:15.712 response: 00:05:15.712 { 00:05:15.712 "code": -32601, 00:05:15.712 "message": "Method not found" 00:05:15.712 } 00:05:15.712 06:48:14 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:05:15.712 06:48:14 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:15.712 06:48:14 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:15.712 06:48:14 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:15.712 06:48:14 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2903637 00:05:15.712 06:48:14 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 2903637 ']' 00:05:15.712 06:48:14 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 2903637 00:05:15.712 06:48:14 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:05:15.712 06:48:14 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:15.712 06:48:15 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2903637 00:05:15.712 06:48:15 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:15.712 06:48:15 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:15.712 06:48:15 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2903637' 00:05:15.712 killing process with pid 2903637 00:05:15.712 06:48:15 app_cmdline -- common/autotest_common.sh@969 -- # kill 2903637 00:05:15.712 06:48:15 app_cmdline -- common/autotest_common.sh@974 -- # wait 2903637 00:05:15.973 00:05:15.973 real 0m1.669s 00:05:15.973 user 0m1.974s 00:05:15.973 sys 0m0.458s 00:05:15.973 06:48:15 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:15.973 06:48:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:15.973 ************************************ 00:05:15.973 END TEST app_cmdline 00:05:15.973 ************************************ 00:05:15.973 06:48:15 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:15.973 06:48:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:15.973 06:48:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:15.973 06:48:15 -- common/autotest_common.sh@10 -- # set +x 00:05:15.973 ************************************ 00:05:15.973 START TEST version 00:05:15.973 ************************************ 00:05:15.973 06:48:15 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:15.973 * Looking for test storage... 
00:05:15.973 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:15.973 06:48:15 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:15.973 06:48:15 version -- common/autotest_common.sh@1691 -- # lcov --version 00:05:15.973 06:48:15 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:16.234 06:48:15 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:16.234 06:48:15 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:16.234 06:48:15 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:16.234 06:48:15 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:16.234 06:48:15 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:16.234 06:48:15 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:16.234 06:48:15 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:16.234 06:48:15 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:16.234 06:48:15 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:16.234 06:48:15 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:16.234 06:48:15 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:16.234 06:48:15 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:16.234 06:48:15 version -- scripts/common.sh@344 -- # case "$op" in 00:05:16.234 06:48:15 version -- scripts/common.sh@345 -- # : 1 00:05:16.234 06:48:15 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:16.234 06:48:15 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:16.234 06:48:15 version -- scripts/common.sh@365 -- # decimal 1 00:05:16.234 06:48:15 version -- scripts/common.sh@353 -- # local d=1 00:05:16.234 06:48:15 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:16.234 06:48:15 version -- scripts/common.sh@355 -- # echo 1 00:05:16.234 06:48:15 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:16.234 06:48:15 version -- scripts/common.sh@366 -- # decimal 2 00:05:16.234 06:48:15 version -- scripts/common.sh@353 -- # local d=2 00:05:16.234 06:48:15 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:16.234 06:48:15 version -- scripts/common.sh@355 -- # echo 2 00:05:16.234 06:48:15 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:16.234 06:48:15 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:16.234 06:48:15 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:16.234 06:48:15 version -- scripts/common.sh@368 -- # return 0 00:05:16.234 06:48:15 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:16.234 06:48:15 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:16.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.234 --rc genhtml_branch_coverage=1 00:05:16.234 --rc genhtml_function_coverage=1 00:05:16.234 --rc genhtml_legend=1 00:05:16.234 --rc geninfo_all_blocks=1 00:05:16.234 --rc geninfo_unexecuted_blocks=1 00:05:16.234 00:05:16.234 ' 00:05:16.234 06:48:15 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:16.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.234 --rc genhtml_branch_coverage=1 00:05:16.234 --rc genhtml_function_coverage=1 00:05:16.234 --rc genhtml_legend=1 00:05:16.234 --rc geninfo_all_blocks=1 00:05:16.234 --rc geninfo_unexecuted_blocks=1 00:05:16.234 00:05:16.234 ' 00:05:16.234 06:48:15 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:16.234 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.234 --rc genhtml_branch_coverage=1 00:05:16.234 --rc genhtml_function_coverage=1 00:05:16.234 --rc genhtml_legend=1 00:05:16.234 --rc geninfo_all_blocks=1 00:05:16.234 --rc geninfo_unexecuted_blocks=1 00:05:16.234 00:05:16.234 ' 00:05:16.234 06:48:15 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:16.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.234 --rc genhtml_branch_coverage=1 00:05:16.234 --rc genhtml_function_coverage=1 00:05:16.234 --rc genhtml_legend=1 00:05:16.234 --rc geninfo_all_blocks=1 00:05:16.234 --rc geninfo_unexecuted_blocks=1 00:05:16.234 00:05:16.234 ' 00:05:16.234 06:48:15 version -- app/version.sh@17 -- # get_header_version major 00:05:16.234 06:48:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:16.234 06:48:15 version -- app/version.sh@14 -- # cut -f2 00:05:16.234 06:48:15 version -- app/version.sh@14 -- # tr -d '"' 00:05:16.234 06:48:15 version -- app/version.sh@17 -- # major=25 00:05:16.234 06:48:15 version -- app/version.sh@18 -- # get_header_version minor 00:05:16.234 06:48:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:16.234 06:48:15 version -- app/version.sh@14 -- # cut -f2 00:05:16.234 06:48:15 version -- app/version.sh@14 -- # tr -d '"' 00:05:16.234 06:48:15 version -- app/version.sh@18 -- # minor=1 00:05:16.234 06:48:15 version -- app/version.sh@19 -- # get_header_version patch 00:05:16.234 06:48:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:16.234 06:48:15 version -- app/version.sh@14 -- # cut -f2 00:05:16.234 06:48:15 version -- app/version.sh@14 -- # tr -d '"' 00:05:16.234 06:48:15 version -- app/version.sh@19 -- # patch=0 00:05:16.234 06:48:15 version -- app/version.sh@20 -- # get_header_version suffix 00:05:16.234 06:48:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:16.234 06:48:15 version -- app/version.sh@14 -- # cut -f2 00:05:16.234 06:48:15 version -- app/version.sh@14 -- # tr -d '"' 00:05:16.234 06:48:15 version -- app/version.sh@20 -- # suffix=-pre 00:05:16.234 06:48:15 version -- app/version.sh@22 -- # version=25.1 00:05:16.234 06:48:15 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:16.234 06:48:15 version -- app/version.sh@28 -- # version=25.1rc0 00:05:16.234 06:48:15 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:16.234 06:48:15 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:16.234 06:48:15 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:16.234 06:48:15 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:16.234 00:05:16.234 real 0m0.277s 00:05:16.234 user 0m0.153s 00:05:16.234 sys 0m0.169s 00:05:16.234 06:48:15 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:16.234 
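version.sh reads each component out of include/spdk/version.h with the grep/cut/tr pipeline traced above, then assembles the string: the patch is only appended when non-zero, and the -pre suffix is what turns 25.1 into the 25.1rc0 that python's spdk.__version__ must match. A condensed sketch of the extraction (header path relative to the spdk checkout; the rc0 mapping is inferred from the trace, not quoted from the script):

    get_header_version() {  # $1 is MAJOR, MINOR, PATCH or SUFFIX
      grep -E "^#define SPDK_VERSION_$1[[:space:]]+" include/spdk/version.h | cut -f2 | tr -d '"'
    }
    version="$(get_header_version MAJOR).$(get_header_version MINOR)"   # 25.1; a PATCH of 0 is skipped
    [[ $(get_header_version SUFFIX) == -pre ]] && version+=rc0          # 25.1rc0, as asserted above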
06:48:15 version -- common/autotest_common.sh@10 -- # set +x 00:05:16.234 ************************************ 00:05:16.234 END TEST version 00:05:16.234 ************************************ 00:05:16.234 06:48:15 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:16.234 06:48:15 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:16.234 06:48:15 -- spdk/autotest.sh@194 -- # uname -s 00:05:16.234 06:48:15 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:16.234 06:48:15 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:16.234 06:48:15 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:16.234 06:48:15 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:16.234 06:48:15 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:05:16.234 06:48:15 -- spdk/autotest.sh@256 -- # timing_exit lib 00:05:16.234 06:48:15 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:16.234 06:48:15 -- common/autotest_common.sh@10 -- # set +x 00:05:16.234 06:48:15 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:05:16.234 06:48:15 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:05:16.234 06:48:15 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:05:16.234 06:48:15 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:05:16.234 06:48:15 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:05:16.235 06:48:15 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:05:16.235 06:48:15 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:16.235 06:48:15 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:16.235 06:48:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:16.235 06:48:15 -- common/autotest_common.sh@10 -- # set +x 00:05:16.235 ************************************ 00:05:16.235 START TEST nvmf_tcp 00:05:16.235 ************************************ 00:05:16.235 06:48:15 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:16.496 * Looking for test storage... 
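Before the nvmf suites run, test/nvmf/common.sh (sourced a little further below) fixes the transport defaults used throughout this configuration: TCP ports 4420-4422, the 192.168.100 address prefix, and a host identity taken from nvme-cli. Only the gen-hostnqn call and the resulting values appear in the trace; the uuid-stripping step here is an assumption about how NVME_HOSTID is derived:

    NVMF_PORT=4420 NVMF_SECOND_PORT=4421 NVMF_THIRD_PORT=4422
    NVME_HOSTNQN=$(nvme gen-hostnqn)    # nqn.2014-08.org.nvmexpress:uuid:<uuid>, as in the trace below
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # assumed derivation: keep the trailing uuid
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")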
00:05:16.496 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:16.496 06:48:15 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:16.496 06:48:15 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:05:16.496 06:48:15 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:16.496 06:48:15 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:16.496 06:48:15 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:16.496 06:48:15 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:16.496 06:48:15 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:16.496 06:48:15 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:16.496 06:48:15 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:16.496 06:48:15 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:16.496 06:48:15 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:16.496 06:48:15 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:16.496 06:48:15 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:16.496 06:48:15 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:16.496 06:48:15 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:16.496 06:48:15 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:16.496 06:48:15 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:16.496 06:48:15 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:16.496 06:48:15 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:16.496 06:48:15 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:16.496 06:48:15 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:16.496 06:48:15 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:16.496 06:48:15 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:16.496 06:48:15 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:16.496 06:48:15 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:16.496 06:48:15 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:16.496 06:48:15 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:16.496 06:48:15 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:16.496 06:48:15 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:16.496 06:48:15 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:16.496 06:48:15 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:16.496 06:48:15 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:16.496 06:48:15 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:16.496 06:48:15 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:16.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.496 --rc genhtml_branch_coverage=1 00:05:16.496 --rc genhtml_function_coverage=1 00:05:16.496 --rc genhtml_legend=1 00:05:16.496 --rc geninfo_all_blocks=1 00:05:16.496 --rc geninfo_unexecuted_blocks=1 00:05:16.496 00:05:16.496 ' 00:05:16.496 06:48:15 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:16.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.496 --rc genhtml_branch_coverage=1 00:05:16.496 --rc genhtml_function_coverage=1 00:05:16.496 --rc genhtml_legend=1 00:05:16.496 --rc geninfo_all_blocks=1 00:05:16.496 --rc geninfo_unexecuted_blocks=1 00:05:16.496 00:05:16.496 ' 00:05:16.496 06:48:15 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:05:16.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.496 --rc genhtml_branch_coverage=1 00:05:16.496 --rc genhtml_function_coverage=1 00:05:16.496 --rc genhtml_legend=1 00:05:16.496 --rc geninfo_all_blocks=1 00:05:16.496 --rc geninfo_unexecuted_blocks=1 00:05:16.496 00:05:16.496 ' 00:05:16.496 06:48:15 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:16.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.496 --rc genhtml_branch_coverage=1 00:05:16.496 --rc genhtml_function_coverage=1 00:05:16.496 --rc genhtml_legend=1 00:05:16.496 --rc geninfo_all_blocks=1 00:05:16.496 --rc geninfo_unexecuted_blocks=1 00:05:16.496 00:05:16.496 ' 00:05:16.496 06:48:15 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:16.496 06:48:15 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:16.496 06:48:15 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:16.496 06:48:15 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:16.496 06:48:15 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:16.496 06:48:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:16.496 ************************************ 00:05:16.496 START TEST nvmf_target_core 00:05:16.496 ************************************ 00:05:16.496 06:48:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:16.759 * Looking for test storage... 00:05:16.759 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:16.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.759 --rc genhtml_branch_coverage=1 00:05:16.759 --rc genhtml_function_coverage=1 00:05:16.759 --rc genhtml_legend=1 00:05:16.759 --rc geninfo_all_blocks=1 00:05:16.759 --rc geninfo_unexecuted_blocks=1 00:05:16.759 00:05:16.759 ' 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:16.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.759 --rc genhtml_branch_coverage=1 00:05:16.759 --rc genhtml_function_coverage=1 00:05:16.759 --rc genhtml_legend=1 00:05:16.759 --rc geninfo_all_blocks=1 00:05:16.759 --rc geninfo_unexecuted_blocks=1 00:05:16.759 00:05:16.759 ' 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:16.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.759 --rc genhtml_branch_coverage=1 00:05:16.759 --rc genhtml_function_coverage=1 00:05:16.759 --rc genhtml_legend=1 00:05:16.759 --rc geninfo_all_blocks=1 00:05:16.759 --rc geninfo_unexecuted_blocks=1 00:05:16.759 00:05:16.759 ' 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:16.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.759 --rc genhtml_branch_coverage=1 00:05:16.759 --rc genhtml_function_coverage=1 00:05:16.759 --rc genhtml_legend=1 00:05:16.759 --rc geninfo_all_blocks=1 00:05:16.759 --rc geninfo_unexecuted_blocks=1 00:05:16.759 00:05:16.759 ' 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:16.759 06:48:16 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:16.760 06:48:16 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:16.760 06:48:16 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:16.760 06:48:16 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:16.760 06:48:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:16.760 06:48:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:16.760 06:48:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:16.760 06:48:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:16.760 06:48:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:16.760 06:48:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:16.760 06:48:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:16.760 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:16.760 06:48:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:16.760 06:48:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:16.760 06:48:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:16.760 06:48:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:16.760 06:48:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:16.760 06:48:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:16.760 06:48:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:16.760 06:48:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:16.760 06:48:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:16.760 06:48:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:16.760 
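The lt/cmp_versions trace repeated in each run_test prologue above feeds the output of "lcov --version | awk '{print $NF}'" into a component-wise version compare (split on IFS=.-:, missing components treated as 0) to decide whether the installed lcov predates 2.x and still needs the --rc lcov_* coverage flags. A condensed, behavior-equivalent sketch of those scripts/common.sh helpers (a reconstruction from the trace, not the verbatim source):

lt() { cmp_versions "$1" "<" "$2"; }

cmp_versions() {
    local -a ver1 ver2
    local op=$2 v
    IFS=.-: read -ra ver1 <<< "$1"    # "1.15" -> (1 15)
    IFS=.-: read -ra ver2 <<< "$3"    # "2"    -> (2)
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        # first differing component decides; absent components count as 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == ">" ]]; return; }
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == "<" ]]; return; }
    done
    [[ $op == "==" ]]    # every component equal
}

lt 1.15 2 && echo "old lcov"    # returns 0, matching the trace above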
************************************ 00:05:16.760 START TEST nvmf_abort 00:05:16.760 ************************************ 00:05:16.760 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:17.022 * Looking for test storage... 00:05:17.022 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:17.022 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:17.022 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:05:17.022 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:17.022 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:17.022 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:17.022 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:17.022 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:17.022 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.022 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:17.022 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:17.022 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:17.022 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:17.022 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:17.022 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:17.022 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:17.022 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:17.022 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:17.022 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:17.022 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:17.022 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:17.022 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:17.022 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.022 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:17.022 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:17.022 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:17.022 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:17.022 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.022 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:17.022 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:17.022 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:17.022 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:17.022 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:17.022 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.022 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:17.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.022 --rc genhtml_branch_coverage=1 00:05:17.022 --rc genhtml_function_coverage=1 00:05:17.022 --rc genhtml_legend=1 00:05:17.022 --rc geninfo_all_blocks=1 00:05:17.022 --rc geninfo_unexecuted_blocks=1 00:05:17.022 00:05:17.022 ' 00:05:17.022 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:17.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.022 --rc genhtml_branch_coverage=1 00:05:17.022 --rc genhtml_function_coverage=1 00:05:17.022 --rc genhtml_legend=1 00:05:17.022 --rc geninfo_all_blocks=1 00:05:17.022 --rc geninfo_unexecuted_blocks=1 00:05:17.022 00:05:17.022 ' 00:05:17.022 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:17.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.022 --rc genhtml_branch_coverage=1 00:05:17.022 --rc genhtml_function_coverage=1 00:05:17.022 --rc genhtml_legend=1 00:05:17.022 --rc geninfo_all_blocks=1 00:05:17.022 --rc geninfo_unexecuted_blocks=1 00:05:17.022 00:05:17.022 ' 00:05:17.022 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:17.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.022 --rc genhtml_branch_coverage=1 00:05:17.022 --rc genhtml_function_coverage=1 00:05:17.022 --rc genhtml_legend=1 00:05:17.022 --rc geninfo_all_blocks=1 00:05:17.022 --rc geninfo_unexecuted_blocks=1 00:05:17.022 00:05:17.022 ' 00:05:17.022 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:17.022 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:17.022 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:05:17.022 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:17.022 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:17.022 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:17.022 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:17.022 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:17.022 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:17.022 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:17.022 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:17.022 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:17.022 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:17.022 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:17.022 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:17.022 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:17.022 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:17.022 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:17.022 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:17.022 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:17.022 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:17.022 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:17.022 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:17.023 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.023 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.023 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.023 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:17.023 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.023 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:17.023 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:17.023 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:17.023 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:17.023 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:17.023 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:17.023 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:17.023 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:17.023 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:17.023 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:17.023 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:17.023 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:17.023 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:17.023 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
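Note the benign complaint both common.sh prologues above emit from line 33: whatever flag is tested there expands to the empty string, and [ '' -eq 1 ] is not a valid integer comparison, so [ prints "integer expression expected" and simply returns false before execution continues at line 37. The trace does not show the variable's name; with a hypothetical SOME_FLAG standing in for it, the usual fix is a default expansion:

SOME_FLAG=""                              # stand-in for whichever flag is unset

[ "$SOME_FLAG" -eq 1 ] && echo on         # [: : integer expression expected

[ "${SOME_FLAG:-0}" -eq 1 ] && echo on    # empty falls back to 0, test is quiet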
00:05:17.023 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:05:17.023 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:17.023 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:05:17.023 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:05:17.023 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:05:17.023 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:17.023 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:17.023 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:17.023 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:05:17.023 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:05:17.023 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:17.023 06:48:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:25.168 06:48:23 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:05:25.168 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:05:25.168 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:25.168 06:48:23 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:05:25.168 Found net devices under 0000:4b:00.0: cvl_0_0 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:05:25.168 Found net devices under 0000:4b:00.1: cvl_0_1 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:25.168 06:48:23 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:25.168 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:25.168 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.678 ms 00:05:25.168 00:05:25.168 --- 10.0.0.2 ping statistics --- 00:05:25.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:25.168 rtt min/avg/max/mdev = 0.678/0.678/0.678/0.000 ms 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:25.168 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:25.168 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:05:25.168 00:05:25.168 --- 10.0.0.1 ping statistics --- 00:05:25.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:25.168 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:25.168 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:05:25.169 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:05:25.169 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:25.169 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:05:25.169 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:05:25.169 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:25.169 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:05:25.169 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:25.169 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:25.169 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # nvmfpid=2908099 00:05:25.169 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 2908099 00:05:25.169 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:25.169 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 2908099 ']' 00:05:25.169 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.169 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:25.169 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.169 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:25.169 06:48:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:25.169 [2024-10-16 06:48:24.055514] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
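nvmf_tcp_init above splits the two detected e810 ports across network namespaces: cvl_0_0 is moved into cvl_0_0_ns_spdk as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), an iptables rule accepts TCP/4420 on the initiator interface, and the two one-packet pings verify the path in both directions before nvmf_tgt is launched inside the namespace. The same bring-up, condensed from the commands in the trace (root required; interface names as discovered above):

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port leaves root ns
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator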
00:05:25.169 [2024-10-16 06:48:24.055578] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:25.169 [2024-10-16 06:48:24.131757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:25.169 [2024-10-16 06:48:24.186975] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:25.169 [2024-10-16 06:48:24.187030] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:25.169 [2024-10-16 06:48:24.187040] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:25.169 [2024-10-16 06:48:24.187047] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:25.169 [2024-10-16 06:48:24.187053] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:25.169 [2024-10-16 06:48:24.189201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:25.169 [2024-10-16 06:48:24.189361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.169 [2024-10-16 06:48:24.189362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:25.430 06:48:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:25.430 06:48:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:05:25.430 06:48:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:05:25.430 06:48:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:25.430 06:48:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:25.430 06:48:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:25.691 06:48:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:25.691 06:48:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.691 06:48:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:25.691 [2024-10-16 06:48:24.936488] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:25.691 06:48:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.691 06:48:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:25.691 06:48:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.691 06:48:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:25.691 Malloc0 00:05:25.691 06:48:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.692 06:48:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:25.692 06:48:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.692 06:48:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:25.692 Delay0 
00:05:25.692 06:48:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.692 06:48:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:25.692 06:48:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.692 06:48:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:25.692 06:48:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.692 06:48:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:25.692 06:48:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.692 06:48:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:25.692 06:48:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.692 06:48:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:25.692 06:48:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.692 06:48:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:25.692 [2024-10-16 06:48:25.025512] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:25.692 06:48:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.692 06:48:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:25.692 06:48:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.692 06:48:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:25.692 06:48:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.692 06:48:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:25.692 [2024-10-16 06:48:25.165513] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:28.240 Initializing NVMe Controllers 00:05:28.240 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:28.240 controller IO queue size 128 less than required 00:05:28.240 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:28.240 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:28.240 Initialization complete. Launching workers. 
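The abort run whose per-queue results follow below was provisioned entirely over JSON-RPC: a TCP transport, a 64 MiB malloc bdev with 4 KiB blocks, a delay bdev stacked on it with 1,000,000-microsecond latencies so the abort requests have in-flight I/O to race against, then subsystem nqn.2016-06.io.spdk:cnode0 exposing that namespace on a 10.0.0.2:4420 listener. rpc_cmd in the trace is a thin wrapper around scripts/rpc.py; the same sequence issued directly (flags copied from the trace):

rpc=scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
$rpc bdev_malloc_create 64 4096 -b Malloc0           # 64 MiB, 4 KiB blocks
$rpc bdev_delay_create -b Malloc0 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000      # latencies in microseconds
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420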
00:05:28.240 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28055 00:05:28.240 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28116, failed to submit 62 00:05:28.240 success 28059, unsuccessful 57, failed 0 00:05:28.240 06:48:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:28.240 06:48:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.240 06:48:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:28.240 06:48:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:28.240 06:48:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:28.240 06:48:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:28.240 06:48:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:05:28.240 06:48:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:28.240 06:48:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:28.240 06:48:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:28.240 06:48:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:28.240 06:48:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:28.240 rmmod nvme_tcp 00:05:28.240 rmmod nvme_fabrics 00:05:28.240 rmmod nvme_keyring 00:05:28.240 06:48:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:28.240 06:48:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:28.240 06:48:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:28.240 06:48:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 2908099 ']' 00:05:28.240 06:48:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 2908099 00:05:28.240 06:48:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 2908099 ']' 00:05:28.240 06:48:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 2908099 00:05:28.240 06:48:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:05:28.240 06:48:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:28.240 06:48:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2908099 00:05:28.240 06:48:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:05:28.240 06:48:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:05:28.240 06:48:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2908099' 00:05:28.240 killing process with pid 2908099 00:05:28.240 06:48:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 2908099 00:05:28.240 06:48:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 2908099 00:05:28.240 06:48:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:05:28.240 06:48:27 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:05:28.240 06:48:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:05:28.240 06:48:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:28.240 06:48:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:05:28.240 06:48:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:05:28.240 06:48:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:05:28.240 06:48:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:28.240 06:48:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:28.240 06:48:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:28.240 06:48:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:28.240 06:48:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:30.157 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:30.157 00:05:30.157 real 0m13.402s 00:05:30.157 user 0m14.014s 00:05:30.157 sys 0m6.667s 00:05:30.157 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:30.157 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:30.157 ************************************ 00:05:30.157 END TEST nvmf_abort 00:05:30.157 ************************************ 00:05:30.419 06:48:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:30.419 06:48:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:30.419 06:48:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:30.419 06:48:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:30.419 ************************************ 00:05:30.419 START TEST nvmf_ns_hotplug_stress 00:05:30.419 ************************************ 00:05:30.419 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:30.419 * Looking for test storage... 
00:05:30.419 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:30.419 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:30.419 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:05:30.419 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:30.419 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:30.419 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:30.419 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:30.419 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:30.419 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:30.419 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:30.419 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:30.419 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:30.419 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:30.419 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:30.419 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:30.419 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:30.419 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:30.419 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:30.419 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:30.419 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:30.419 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:30.419 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:30.419 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.419 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:30.419 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:30.419 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:30.419 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:30.419 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.419 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:30.419 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:30.419 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:30.419 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:30.419 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:30.419 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:30.419 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:30.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.419 --rc genhtml_branch_coverage=1 00:05:30.419 --rc genhtml_function_coverage=1 00:05:30.419 --rc genhtml_legend=1 00:05:30.419 --rc geninfo_all_blocks=1 00:05:30.419 --rc geninfo_unexecuted_blocks=1 00:05:30.419 00:05:30.419 ' 00:05:30.419 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:30.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.419 --rc genhtml_branch_coverage=1 00:05:30.419 --rc genhtml_function_coverage=1 00:05:30.419 --rc genhtml_legend=1 00:05:30.419 --rc geninfo_all_blocks=1 00:05:30.419 --rc geninfo_unexecuted_blocks=1 00:05:30.419 00:05:30.419 ' 00:05:30.419 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:30.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.419 --rc genhtml_branch_coverage=1 00:05:30.419 --rc genhtml_function_coverage=1 00:05:30.419 --rc genhtml_legend=1 00:05:30.419 --rc geninfo_all_blocks=1 00:05:30.419 --rc geninfo_unexecuted_blocks=1 00:05:30.419 00:05:30.419 ' 00:05:30.419 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:30.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.419 --rc genhtml_branch_coverage=1 00:05:30.419 --rc genhtml_function_coverage=1 00:05:30.419 --rc genhtml_legend=1 00:05:30.419 --rc geninfo_all_blocks=1 00:05:30.419 --rc geninfo_unexecuted_blocks=1 00:05:30.419 00:05:30.419 ' 00:05:30.419 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:30.681 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:30.681 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:30.681 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:30.681 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:30.681 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:30.681 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:30.681 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:30.681 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:30.681 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:30.681 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:30.681 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:30.681 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:30.681 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:30.681 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:30.681 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:30.681 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:30.681 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:30.681 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:30.681 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:30.681 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:30.681 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:30.681 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:30.681 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.681 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.681 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.681 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:30.681 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.681 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:30.681 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:30.681 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:30.681 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:30.681 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:30.681 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:30.681 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:05:30.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:05:30.681 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:05:30.681 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:05:30.681 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0
00:05:30.681 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:05:30.681 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit
00:05:30.681 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']'
00:05:30.681 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:05:30.681 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs
00:05:30.681 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no
00:05:30.681 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns
00:05:30.681 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:05:30.681 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:05:30.681 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:05:30.681 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]]
00:05:30.681 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs
00:05:30.681 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable
00:05:30.681 06:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:05:38.821 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:05:38.821 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=()
00:05:38.821 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs
00:05:38.821 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=()
00:05:38.821 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:05:38.821 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=()
00:05:38.821 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers
00:05:38.821 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=()
00:05:38.821 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs
00:05:38.821 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=()
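
The "[: : integer expression expected" complaint above is bash objecting to an empty string being handed to an arithmetic test ('[' '' -eq 1 ']'); the script keeps going, so it is log noise rather than a test failure. The pattern and a defensive variant (illustrative sketch, variable name hypothetical; not a patch to nvmf/common.sh):

    # An empty/unset variable fed to an integer comparison reproduces the message
    flag=''
    [ "$flag" -eq 1 ] && echo enabled        # -> [: : integer expression expected
    # Defaulting the expansion to 0 sidesteps it
    [ "${flag:-0}" -eq 1 ] && echo enabled

00:05:38.821 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 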
local -ga e810 00:05:38.821 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:38.821 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:38.821 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:38.821 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:38.821 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:38.821 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:38.821 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:38.821 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:38.821 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:38.821 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:38.821 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:38.821 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:38.821 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:38.821 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:38.821 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:38.821 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:38.821 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:38.821 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:38.821 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:38.821 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:38.821 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:38.821 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:38.821 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:38.821 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:05:38.821 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:05:38.821 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:38.821 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:38.821 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:38.821 
06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:38.821 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:38.821 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:05:38.822 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:05:38.822 Found net devices under 0000:4b:00.0: cvl_0_0 00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:05:38.822 Found net devices under 0000:4b:00.1: cvl_0_1 00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:05:38.822 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:05:38.822 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms
00:05:38.822 
00:05:38.822 --- 10.0.0.2 ping statistics ---
00:05:38.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:05:38.822 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms
00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:05:38.822 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:05:38.822 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms
00:05:38.822 
00:05:38.822 --- 10.0.0.1 ping statistics ---
00:05:38.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:05:38.822 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms
00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0
00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE
00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable
00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=2912956
00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 2912956
00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
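
Condensed, the nvmf_tcp_init sequence in the trace above turns the two e810 ports into a back-to-back test link by moving one into a private network namespace; roughly (interface names and addresses as in the trace; a sketch of the idea, not the verbatim nvmf/common.sh code):

    # Move the target-side port into its own namespace; keep the initiator port in the root ns
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Admit NVMe/TCP traffic, then verify reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The target application then runs entirely inside cvl_0_0_ns_spdk, which is why nvmf_tgt is launched under ip netns exec above.

00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 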
2912956 ']' 00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:38.822 06:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:38.822 [2024-10-16 06:48:37.518318] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:05:38.822 [2024-10-16 06:48:37.518381] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:38.822 [2024-10-16 06:48:37.609503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:38.822 [2024-10-16 06:48:37.661444] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:38.822 [2024-10-16 06:48:37.661495] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:38.822 [2024-10-16 06:48:37.661504] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:38.822 [2024-10-16 06:48:37.661511] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:38.822 [2024-10-16 06:48:37.661517] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
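
With the reactors up on cores 1-3, the script provisions the target over the RPC socket; condensed from the @25-@36 trace lines that follow (arguments verbatim from the trace, comments added; a sketch, not the script itself):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc_py nvmf_create_transport -t tcp -o -u 8192                    # TCP transport
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc_py bdev_malloc_create 32 512 -b Malloc0                       # 32 MB, 512 B blocks
    $rpc_py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0    # becomes nsid 1
    $rpc_py bdev_null_create NULL1 1000 512                            # resized by the loop below
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

Delay0 wraps Malloc0 behind the delay bdev so that namespace 1 always has long-lived in-flight I/O to race against the hot-remove.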
00:05:38.822 [2024-10-16 06:48:37.663430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:38.823 [2024-10-16 06:48:37.663592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.823 [2024-10-16 06:48:37.663593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:39.083 06:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:39.083 06:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:05:39.083 06:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:05:39.083 06:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:39.083 06:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:39.083 06:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:39.083 06:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:39.083 06:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:39.083 [2024-10-16 06:48:38.556508] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:39.344 06:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:39.344 06:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:39.605 [2024-10-16 06:48:38.951754] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:39.605 06:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:39.866 06:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:40.127 Malloc0 00:05:40.127 06:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:40.127 Delay0 00:05:40.127 06:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:40.388 06:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:40.648 NULL1 00:05:40.648 06:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:40.908 06:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2913538 00:05:40.908 06:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:40.908 06:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2913538 00:05:40.908 06:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.849 Read completed with error (sct=0, sc=11) 00:05:42.109 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:42.109 06:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:42.109 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:42.109 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:42.109 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:42.109 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:42.109 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:42.109 06:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:42.109 06:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:42.369 true 00:05:42.369 06:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2913538 00:05:42.369 06:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.310 06:48:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.310 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:43.310 06:48:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:43.310 06:48:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:43.571 true 00:05:43.571 06:48:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2913538 00:05:43.571 06:48:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.831 06:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.831 06:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:43.831 06:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:44.092 true 00:05:44.092 06:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2913538 00:05:44.092 06:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.476 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:45.476 06:48:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.476 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:45.476 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:45.476 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:45.476 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:45.476 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:45.476 06:48:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:45.476 06:48:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:45.476 true 00:05:45.476 06:48:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2913538 00:05:45.476 06:48:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.417 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:46.417 06:48:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:46.417 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:46.677 06:48:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:46.677 06:48:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:46.677 true 00:05:46.677 06:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2913538 00:05:46.677 06:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.938 06:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:05:47.198 06:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:47.198 06:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:47.198 true 00:05:47.198 06:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2913538 00:05:47.198 06:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.459 06:48:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:47.719 06:48:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:47.719 06:48:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:47.719 true 00:05:47.979 06:48:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2913538 00:05:47.979 06:48:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.979 06:48:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.240 06:48:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:48.240 06:48:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:48.501 true 00:05:48.501 06:48:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2913538 00:05:48.501 06:48:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.441 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:49.441 06:48:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.441 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:49.701 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:49.701 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:49.701 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:49.701 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:49.701 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:49.701 06:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 
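
Every block from here to the end of the run is one turn of the same crank: a 30-second spdk_nvme_perf workload on the initiator side races three RPCs on the target side, and the "Read completed with error (sct=0, sc=11)" lines are the expected failures (generic status 0x0b, Invalid Namespace or Format) while namespace 1 is momentarily detached. Reconstructed from the @40-@50 trace lines (a sketch with paths shortened; the PERF_PID plumbing is assumed):

    # Initiator: 30 s of 512 B random reads against the TCP target
    spdk/build/bin/spdk_nvme_perf -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!
    null_size=1000
    while kill -0 "$PERF_PID" 2> /dev/null; do    # keep stressing until perf exits
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # hot-remove nsid 1
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # hot-add it back
        null_size=$((null_size + 1))
        $rpc_py bdev_null_resize NULL1 $null_size                        # 1001, 1002, ...
    done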
00:05:49.701 06:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:49.962 true 00:05:49.962 06:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2913538 00:05:49.962 06:48:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.903 06:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:50.903 06:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:50.903 06:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:51.163 true 00:05:51.163 06:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2913538 00:05:51.163 06:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.458 06:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.458 06:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:51.458 06:48:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:51.735 true 00:05:51.735 06:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2913538 00:05:51.735 06:48:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.707 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:52.968 06:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.968 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:52.968 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:52.968 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:52.968 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:52.968 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:52.968 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:52.968 06:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:52.968 06:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:53.228 true 00:05:53.228 06:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2913538 00:05:53.228 06:48:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.169 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:54.169 06:48:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.169 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:54.169 06:48:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:54.169 06:48:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:54.429 true 00:05:54.429 06:48:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2913538 00:05:54.429 06:48:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.690 06:48:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.690 06:48:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:54.690 06:48:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:54.950 true 00:05:54.950 06:48:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2913538 00:05:54.950 06:48:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.211 06:48:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.211 06:48:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:55.211 06:48:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:55.471 true 00:05:55.471 06:48:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2913538 00:05:55.471 06:48:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.732 06:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.992 06:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:55.992 06:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:55.992 true 00:05:55.992 06:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2913538 00:05:55.992 06:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.252 06:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.513 06:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:56.513 06:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:56.513 true 00:05:56.513 06:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2913538 00:05:56.513 06:48:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.773 06:48:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.034 06:48:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:57.034 06:48:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:57.034 true 00:05:57.034 06:48:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2913538 00:05:57.034 06:48:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.416 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:58.416 06:48:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:58.416 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:58.416 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:58.416 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:58.416 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:58.416 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:58.416 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:58.416 06:48:57 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:58.416 06:48:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:58.676 true 00:05:58.676 06:48:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2913538 00:05:58.676 06:48:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.618 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:59.618 06:48:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.618 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:59.618 06:48:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:59.618 06:48:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:59.878 true 00:05:59.878 06:48:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2913538 00:05:59.878 06:48:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.139 06:48:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:00.398 06:48:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:00.398 06:48:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:00.398 true 00:06:00.398 06:48:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2913538 00:06:00.398 06:48:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.657 06:49:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:00.917 06:49:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:00.917 06:49:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:00.917 true 00:06:00.917 06:49:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2913538 00:06:00.917 06:49:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.177 06:49:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:01.437 06:49:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:01.437 06:49:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:01.437 true 00:06:01.437 06:49:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2913538 00:06:01.437 06:49:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.818 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:02.818 06:49:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:02.818 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:02.818 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:02.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:02.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:02.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:02.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:02.819 06:49:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:02.819 06:49:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:03.078 true 00:06:03.078 06:49:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2913538 00:06:03.078 06:49:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.020 06:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:04.020 06:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:04.020 06:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:04.281 true 00:06:04.281 06:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2913538 00:06:04.281 06:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.541 06:49:03 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:04.541 06:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:04.541 06:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:04.802 true 00:06:04.802 06:49:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2913538 00:06:04.802 06:49:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.187 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:06.187 06:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:06.187 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:06.187 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:06.187 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:06.187 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:06.187 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:06.187 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:06.187 06:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:06.187 06:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:06.187 true 00:06:06.188 06:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2913538 00:06:06.188 06:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.140 06:49:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:07.399 06:49:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:07.399 06:49:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:07.399 true 00:06:07.399 06:49:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2913538 00:06:07.399 06:49:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.659 06:49:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:07.919 06:49:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:06:07.919 06:49:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:06:07.919 true 00:06:07.919 06:49:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2913538 00:06:07.919 06:49:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.301 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:09.301 06:49:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:09.301 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:09.301 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:09.301 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:09.301 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:09.301 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:09.301 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:09.301 06:49:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:06:09.301 06:49:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:06:09.561 true 00:06:09.561 06:49:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2913538 00:06:09.561 06:49:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.502 06:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:10.502 06:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:06:10.502 06:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:06:10.761 true 00:06:10.761 06:49:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2913538 00:06:10.761 06:49:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:11.020 06:49:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:11.020 Initializing NVMe Controllers 00:06:11.020 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:cnode1
00:06:11.020 Controller IO queue size 128, less than required.
00:06:11.020 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:11.020 Controller IO queue size 128, less than required.
00:06:11.020 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:11.020 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:06:11.020 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:06:11.020 Initialization complete. Launching workers.
00:06:11.021 ========================================================
00:06:11.021                                                                           Latency(us)
00:06:11.021 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:06:11.021 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    2125.79       1.04   36385.80    1466.66 1064691.54
00:06:11.021 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   18352.32       8.96    6974.24    1191.97  302190.74
00:06:11.021 ========================================================
00:06:11.021 Total                                                                  :   20478.11      10.00   10027.40    1191.97 1064691.54
00:06:11.021
00:06:11.021 06:49:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032
00:06:11.021 06:49:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032
00:06:11.279 true
00:06:11.279 06:49:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2913538
00:06:11.279 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2913538) - No such process
00:06:11.279 06:49:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2913538
00:06:11.279 06:49:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:11.539 06:49:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:11.539 06:49:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:06:11.539 06:49:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:06:11.539 06:49:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:06:11.539 06:49:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:11.539 06:49:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:06:11.799 null0
00:06:11.799 06:49:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:11.799 06:49:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:11.799 06:49:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
bdev_null_create null1 100 4096 00:06:12.059 null1 00:06:12.059 06:49:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:12.059 06:49:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:12.059 06:49:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:12.059 null2 00:06:12.059 06:49:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:12.059 06:49:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:12.059 06:49:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:12.319 null3 00:06:12.319 06:49:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:12.319 06:49:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:12.319 06:49:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:12.579 null4 00:06:12.580 06:49:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:12.580 06:49:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:12.580 06:49:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:12.840 null5 00:06:12.840 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:12.840 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:12.840 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:12.840 null6 00:06:12.840 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:12.840 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:12.840 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:13.102 null7 00:06:13.102 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:13.102 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:13.102 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:13.102 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:13.102 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
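
The trace up to the "No such process" and "wait 2913538" entries above is the first phase of ns_hotplug_stress.sh: while a backgrounded I/O generator (PID 2913538 in this run) keeps reading, the script hot-removes and re-adds namespace 1 and grows the NULL1 bdev by one block per pass. A minimal sketch of that loop, reconstructed from the script line numbers in the trace; rpc_py, perf_pid, and the starting size are assumptions, not taken from the log:

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  null_size=1000                          # assumed starting size; this excerpt picks up at 1019
  while kill -0 "$perf_pid"; do           # line 44: is the I/O generator still alive?
      "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # line 45: hot-remove ns 1
      "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # line 46: re-attach the Delay0 bdev
      ((++null_size))                     # line 49: 1019, 1020, ... in the trace
      "$rpc_py" bdev_null_resize NULL1 "$null_size"                      # line 50: resize NULL1 while I/O runs
  done
  wait "$perf_pid"                        # line 53: reap the generator once kill -0 fails
  "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1        # lines 54-55: final teardown
  "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2

The repeated "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" entries are the expected side effect: reads that are in flight when the namespace disappears complete with an error, and the generator rate-limits the log output.
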
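The block from "Initializing NVMe Controllers" through the Total line above is that generator's exit report: per-namespace IOPS and latency, printed once it shuts down. NSID 1, the namespace that was being hot-plugged, predictably averages roughly five times the latency of NSID 2. The exact invocation is outside this excerpt; a representative way to aim SPDK's example perf tool at this target would be (queue depth, I/O size, workload, and duration are assumptions, not from the log):

  ./build/examples/perf -q 128 -o 4096 -w randread -t 30 \
      -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
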
00:06:13.102 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:13.102 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:13.102 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:13.102 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:13.102 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:13.102 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.102 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:13.102 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:13.102 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:13.102 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:13.102 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:13.102 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:13.102 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:13.102 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:13.102 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:13.102 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.102 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:13.102 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:13.102 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.102 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:13.102 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:13.102 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:13.102 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:13.102 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
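
Phase two starts at the nthreads=8 entry above: lines 58-60 of the script create eight 100 MiB null bdevs with a 4096-byte block size (null0 through null7), one per worker. The (( i = 0 )) / (( i < nthreads )) / (( ++i )) evaluations in the trace are the expansion of a plain counting loop, roughly (same assumed rpc_py as in the sketch above):

  nthreads=8
  pids=()                                           # line 58: will hold the worker PIDs spawned next
  for ((i = 0; i < nthreads; i++)); do              # line 59
      "$rpc_py" bdev_null_create "null$i" 100 4096  # line 60: echoes the new bdev's name ("null0", ...)
  done
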
00:06:13.102 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:13.102 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:13.102 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:13.102 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:13.102 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:13.102 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.102 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:13.103 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:13.103 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:13.103 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:13.103 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:13.103 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:13.103 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:13.103 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.103 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:13.103 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:13.103 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:13.103 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:13.103 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:13.103 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:13.103 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:13.103 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
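
Each worker is then launched in the background and its PID recorded, which is what the interleaved "add_remove N nullX" and "pids+=($!)" entries here show; add_remove itself is sketched after the churn below. Roughly:

  for ((i = 0; i < nthreads; i++)); do   # line 62
      add_remove $((i + 1)) "null$i" &   # line 63: nsid 1..8 paired with bdev null0..null7
      pids+=($!)                         # line 64
  done
  wait "${pids[@]}"                      # line 66: traced just below as "wait 2920030 2920032 ..."
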
00:06:13.103 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.103 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:13.103 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:13.103 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:13.103 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:13.103 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:13.103 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:13.103 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:13.103 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:13.103 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:13.103 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.103 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2920030 2920032 2920033 2920036 2920037 2920039 2920041 2920043 00:06:13.103 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:13.103 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:13.103 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:13.103 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:13.103 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.103 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:13.365 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:13.365 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:13.365 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:13.365 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:06:13.365 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:13.365 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:13.365 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:13.365 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:13.365 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.365 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.365 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:13.365 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.365 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.365 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:13.627 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.627 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.627 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:13.627 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.627 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.627 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:13.627 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.627 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.627 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:13.627 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.627 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.627 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:13.627 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.627 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.627 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:13.627 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.627 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.627 06:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:13.627 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:13.627 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:13.627 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:13.627 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:13.627 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:13.627 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:13.890 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:13.890 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:13.890 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.890 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.890 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 
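
The interleaved @14-@18 entries on every line here are those eight add_remove() instances running concurrently, each attaching and detaching its own namespace ten times. Sketched from the trace:

  add_remove() {
      local nsid=$1 bdev=$2               # line 14
      for ((i = 0; i < 10; i++)); do      # line 16
          "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # line 17
          "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # line 18
      done
  }

Because all eight workers target the same subsystem, the RPC ordering in the log is nondeterministic; an add for nsid 3 can land before the add for nsid 2 that was issued earlier, which is exactly the interleaving visible here.
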
00:06:13.890 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.890 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.890 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:13.890 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.890 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.890 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:13.890 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.890 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.890 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:13.890 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.890 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.890 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:13.890 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.890 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.890 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:13.890 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.890 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.890 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:13.890 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:13.890 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.890 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.890 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 
nqn.2016-06.io.spdk:cnode1 null6 00:06:13.890 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:14.152 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:14.152 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.152 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.152 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:14.152 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.152 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:14.152 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:14.152 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:14.152 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:14.152 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.152 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.152 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:14.152 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.152 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.152 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:14.152 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:14.413 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.413 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.413 
06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:14.413 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.413 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.413 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:14.413 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.413 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.413 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:14.413 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.413 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.413 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:14.413 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.413 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.413 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:14.413 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:14.413 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:14.413 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:14.413 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.413 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.413 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:14.413 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:14.413 06:49:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:14.675 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.675 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.675 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:14.675 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.675 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:14.675 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.675 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.675 06:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:14.675 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.675 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.675 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:14.675 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:14.675 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.675 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.675 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:14.675 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:14.675 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.675 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.675 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:14.675 06:49:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.675 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.675 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:14.675 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:14.935 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.935 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.935 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:14.935 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:14.935 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.935 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.935 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:14.935 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:14.935 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.935 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.935 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:14.935 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.935 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.935 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:14.935 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.935 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:14.935 06:49:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:14.935 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.935 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.935 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:14.935 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:15.196 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.196 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.196 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:15.196 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:15.196 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:15.196 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.196 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.196 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:15.196 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.196 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.196 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:15.196 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.196 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.196 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:15.197 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.197 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.197 06:49:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:15.197 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:15.197 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:15.197 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.197 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.197 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:15.197 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.197 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.197 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:15.458 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:15.458 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:15.458 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:15.458 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:15.458 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.458 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.458 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:15.458 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.458 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.458 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:15.458 06:49:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:15.458 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:15.458 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.458 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.458 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:15.458 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.458 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.458 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:15.458 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.458 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.458 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:15.458 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.458 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.458 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:15.458 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:15.718 06:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:15.718 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.718 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.718 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:15.718 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.718 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.718 06:49:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:15.718 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:15.718 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:15.718 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:15.718 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.718 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.718 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:15.718 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.718 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.718 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:15.718 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:15.977 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:15.977 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:15.977 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.977 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.977 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.977 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:15.977 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.977 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:15.977 06:49:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.977 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.977 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:15.977 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:15.977 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:15.977 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.977 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.977 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:15.977 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.977 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.977 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:15.977 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.977 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.977 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:15.977 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:15.977 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:15.977 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:15.977 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.977 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.977 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:15.977 06:49:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.977 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.237 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:16.237 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:16.237 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:16.237 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:16.237 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.237 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.237 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:16.237 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:16.237 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.237 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.237 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:16.237 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.237 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.237 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:16.237 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:16.237 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.237 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.237 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:16.497 06:49:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.497 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.497 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:16.497 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.497 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.497 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:16.497 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:16.497 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.497 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.497 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:16.497 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.497 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.497 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:16.497 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:16.497 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:16.497 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:16.497 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:16.497 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:16.497 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:16.497 06:49:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.497 06:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.758 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.758 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.758 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:16.758 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:16.758 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.758 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.758 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:16.758 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.758 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.758 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:16.758 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.758 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.758 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.758 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.758 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.758 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.758 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:16.758 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.758 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.758 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:16.758 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:06:17.019 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.019 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.019 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.019 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.019 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.019 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.019 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:17.019 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:17.019 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:06:17.019 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:17.019 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:17.019 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:17.019 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:17.019 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:17.019 rmmod nvme_tcp 00:06:17.019 rmmod nvme_fabrics 00:06:17.279 rmmod nvme_keyring 00:06:17.279 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:17.279 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:17.279 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:17.279 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 2912956 ']' 00:06:17.279 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 2912956 00:06:17.279 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 2912956 ']' 00:06:17.279 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 2912956 00:06:17.279 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:06:17.279 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:17.279 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2912956 00:06:17.279 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:17.279 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:17.279 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2912956' 00:06:17.279 killing process with pid 2912956 00:06:17.279 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 2912956 
00:06:17.279 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 2912956 00:06:17.279 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:06:17.279 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:06:17.279 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:06:17.279 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:17.279 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save 00:06:17.279 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:06:17.279 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore 00:06:17.279 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:17.279 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:17.279 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:17.279 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:17.279 06:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:19.828 06:49:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:19.828 00:06:19.828 real 0m49.091s 00:06:19.828 user 3m13.274s 00:06:19.828 sys 0m16.295s 00:06:19.828 06:49:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.828 06:49:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:19.828 ************************************ 00:06:19.828 END TEST nvmf_ns_hotplug_stress 00:06:19.828 ************************************ 00:06:19.829 06:49:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:19.829 06:49:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:19.829 06:49:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.829 06:49:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:19.829 ************************************ 00:06:19.829 START TEST nvmf_delete_subsystem 00:06:19.829 ************************************ 00:06:19.829 06:49:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:19.829 * Looking for test storage... 
00:06:19.829 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:19.829 06:49:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:19.829 06:49:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:06:19.829 06:49:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:19.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.829 --rc genhtml_branch_coverage=1 00:06:19.829 --rc genhtml_function_coverage=1 00:06:19.829 --rc genhtml_legend=1 00:06:19.829 --rc geninfo_all_blocks=1 00:06:19.829 --rc geninfo_unexecuted_blocks=1 00:06:19.829 00:06:19.829 ' 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:19.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.829 --rc genhtml_branch_coverage=1 00:06:19.829 --rc genhtml_function_coverage=1 00:06:19.829 --rc genhtml_legend=1 00:06:19.829 --rc geninfo_all_blocks=1 00:06:19.829 --rc geninfo_unexecuted_blocks=1 00:06:19.829 00:06:19.829 ' 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:19.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.829 --rc genhtml_branch_coverage=1 00:06:19.829 --rc genhtml_function_coverage=1 00:06:19.829 --rc genhtml_legend=1 00:06:19.829 --rc geninfo_all_blocks=1 00:06:19.829 --rc geninfo_unexecuted_blocks=1 00:06:19.829 00:06:19.829 ' 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:19.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.829 --rc genhtml_branch_coverage=1 00:06:19.829 --rc genhtml_function_coverage=1 00:06:19.829 --rc genhtml_legend=1 00:06:19.829 --rc geninfo_all_blocks=1 00:06:19.829 --rc geninfo_unexecuted_blocks=1 00:06:19.829 00:06:19.829 ' 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:19.829 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:19.830 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:19.830 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:19.830 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:19.830 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:19.830 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:19.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:19.830 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:19.830 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:19.830 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:19.830 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:19.830 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:06:19.830 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:19.830 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:06:19.830 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:06:19.830 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:06:19.830 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:19.830 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:19.830 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:19.830 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:06:19.830 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:06:19.830 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:19.830 06:49:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:27.971 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:27.972 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:27.972 
06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:27.972 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:27.972 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:27.972 Found net devices under 0000:4b:00.1: cvl_0_1 
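[Editor's note] The nvmf/common.sh@408..@427 markers above walk each detected e810 PCI function and resolve it to its kernel interface through sysfs. A condensed sketch of that resolution step; the two PCI addresses, the sysfs layout, and the echoed messages are taken from the log, while the loop framing is an assumption:

    # Map each NIC PCI function to its net interface name via sysfs,
    # as traced at nvmf/common.sh@409/@425/@426 above.
    pci_devs=(0000:4b:00.0 0000:4b:00.1)
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip path, keep iface name
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done

This yields the two interfaces cvl_0_0 and cvl_0_1 that the TCP test fabric is built on next.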
00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:27.972 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:27.972 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.690 ms 00:06:27.972 00:06:27.972 --- 10.0.0.2 ping statistics --- 00:06:27.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:27.972 rtt min/avg/max/mdev = 0.690/0.690/0.690/0.000 ms 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:27.972 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:27.972 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:06:27.972 00:06:27.972 --- 10.0.0.1 ping statistics --- 00:06:27.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:27.972 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=2925384 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 2925384 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 2925384 ']' 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:27.972 06:49:26 
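[Editor's note] Before the target app starts, nvmf_tcp_init (the @250..@291 markers just above) wires those two ports into a point-to-point test fabric: the target-side port moves into a private network namespace and each side gets one address, then connectivity is verified both ways. A condensed replay of those steps; every command appears verbatim in the trace, only the comments are added (the SPDK_NVMF comment tag on the iptables rule, used by the later cleanup, is omitted for brevity):

    ip netns add cvl_0_0_ns_spdk                         # private ns for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

Both pings succeed above (0.690 ms and 0.324 ms), confirming the fabric before any NVMe traffic flows.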
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:27.972 06:49:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:27.972 [2024-10-16 06:49:26.688993] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:06:27.972 [2024-10-16 06:49:26.689061] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:27.972 [2024-10-16 06:49:26.777333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:27.972 [2024-10-16 06:49:26.828677] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:27.972 [2024-10-16 06:49:26.828731] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:27.972 [2024-10-16 06:49:26.828739] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:27.972 [2024-10-16 06:49:26.828746] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:27.972 [2024-10-16 06:49:26.828752] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:27.972 [2024-10-16 06:49:26.830444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.972 [2024-10-16 06:49:26.830449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.233 06:49:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:28.233 06:49:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:06:28.233 06:49:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:06:28.233 06:49:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:28.233 06:49:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:28.233 06:49:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:28.233 06:49:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:28.233 06:49:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.233 06:49:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:28.233 [2024-10-16 06:49:27.565655] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:28.233 06:49:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.233 06:49:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:28.233 06:49:27 
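[Editor's note] The nvmfappstart -m 0x3 call traced above launches nvmf_tgt inside that namespace and blocks until its RPC socket answers. The launch command and pid below are taken from this run; the polling loop is a sketch of what waitforlisten does, not its literal body:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!                     # 2925384 in this run
    # waitforlisten (sketch): retry until the app accepts RPCs on /var/tmp/spdk.sock
    for (( i = 0; i < 100; i++ )); do
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done

Once the socket answers, the DPDK/reactor NOTICE lines above show the app running on cores 0 and 1 (mask 0x3), and the test proceeds to provision the subsystem.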
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.233 06:49:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:28.233 06:49:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.233 06:49:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:28.233 06:49:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.233 06:49:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:28.233 [2024-10-16 06:49:27.589935] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:28.233 06:49:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.233 06:49:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:28.233 06:49:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.233 06:49:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:28.233 NULL1 00:06:28.233 06:49:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.233 06:49:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:28.233 06:49:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.233 06:49:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:28.233 Delay0 00:06:28.233 06:49:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.233 06:49:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.233 06:49:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.234 06:49:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:28.234 06:49:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.234 06:49:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2925561 00:06:28.234 06:49:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:28.234 06:49:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:28.234 [2024-10-16 06:49:27.706982] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
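A minimal sketch of the provisioning the xtrace above records, assuming rpc_cmd forwards to scripts/rpc.py on the default /var/tmp/spdk.sock socket; every flag below is copied verbatim from the trace:

    # assumption: rpc_cmd in this suite is equivalent to scripts/rpc.py
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8 KiB IO unit size
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_null_create NULL1 1000 512                # 1000 MiB null bdev, 512 B blocks
    $RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # drive I/O against the slow namespace in the background
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &       # QD 128, 70/30 randrw, 512 B I/O, 5 s
    perf_pid=$!

Because bdev_delay_create takes its four latency arguments in microseconds, every operation through Delay0 takes roughly a second, so the nvmf_delete_subsystem traced next executes while spdk_nvme_perf still has a full queue in flight — which is exactly what the failed completions below exercise.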
00:06:30.148 06:49:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:30.148 06:49:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.148 06:49:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:30.409 [elided: a long run of repeated 'Read completed with error (sct=0, sc=8)' / 'Write completed with error (sct=0, sc=8)' completions interleaved with 'starting I/O failed: -6' records, before and between the qpair state errors preserved below — the subsystem is deleted while spdk_nvme_perf still has I/O queued, so every outstanding request fails]
00:06:30.410 [2024-10-16 06:49:29.838196] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa3b000d310 is same with the state(6) to be set
00:06:31.352 [2024-10-16 06:49:30.817683] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cda70 is same with the state(6) to be set
00:06:31.352 [2024-10-16 06:49:30.838975] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cc930 is same with the state(6) to be set
00:06:31.352 [2024-10-16 06:49:30.839219] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cc570 is same with the state(6) to be set
00:06:31.353 [2024-10-16 06:49:30.840371] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa3b000d640 is same with the state(6) to be set
00:06:31.353 [2024-10-16 06:49:30.840737] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0x7fa3b000cfe0 is same with the state(6) to be set 00:06:31.353 Initializing NVMe Controllers 00:06:31.353 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:31.353 Controller IO queue size 128, less than required. 00:06:31.353 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:31.353 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:31.353 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:31.353 Initialization complete. Launching workers. 00:06:31.353 ======================================================== 00:06:31.353 Latency(us) 00:06:31.353 Device Information : IOPS MiB/s Average min max 00:06:31.353 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 189.58 0.09 896328.17 452.48 1043260.58 00:06:31.353 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 159.23 0.08 983375.16 354.63 2003256.65 00:06:31.353 ======================================================== 00:06:31.353 Total : 348.81 0.17 936064.31 354.63 2003256.65 00:06:31.353 00:06:31.353 [2024-10-16 06:49:30.841303] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cda70 (9): Bad file descriptor 00:06:31.353 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:31.353 06:49:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.353 06:49:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:31.353 06:49:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2925561 00:06:31.353 06:49:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:31.924 06:49:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:31.924 06:49:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2925561 00:06:31.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2925561) - No such process 00:06:31.924 06:49:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2925561 00:06:31.924 06:49:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:06:31.924 06:49:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2925561 00:06:31.924 06:49:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:06:31.924 06:49:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:31.924 06:49:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:06:31.924 06:49:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:31.924 06:49:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 2925561 00:06:31.924 06:49:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:06:31.924 06:49:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:31.924 06:49:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:31.924 06:49:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:31.924 06:49:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:31.924 06:49:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.924 06:49:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:31.924 06:49:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.924 06:49:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:31.924 06:49:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.924 06:49:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:31.924 [2024-10-16 06:49:31.370617] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:31.924 06:49:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.924 06:49:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:31.924 06:49:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.924 06:49:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:31.924 06:49:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.924 06:49:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2926249 00:06:31.924 06:49:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:31.924 06:49:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:31.924 06:49:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2926249 00:06:31.924 06:49:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:32.185 [2024-10-16 06:49:31.458283] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
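The repeated (( delay++ > 20 )) / kill -0 / sleep 0.5 records that follow are a bounded poll loop waiting for the relaunched spdk_nvme_perf, pid 2926249, to exit. Reconstructed from the traced line numbers, not copied from delete_subsystem.sh, a minimal sketch:

    # sketch only: poll until perf exits, with an upper bound on the wait
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do   # kill -0 probes liveness, sends no signal
        (( delay++ > 20 )) && exit 1            # give up after ~10 s of 0.5 s polls
        sleep 0.5
    done

This run is left to finish on its own: after the 3 s workload completes, kill -0 starts failing with 'No such process', the loop ends, and the suite wait-s on the pid to collect its exit status.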
00:06:32.447 06:49:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:32.447 06:49:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2926249 00:06:32.447 06:49:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:33.018 06:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:33.018 06:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2926249 00:06:33.018 06:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:33.589 06:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:33.589 06:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2926249 00:06:33.589 06:49:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:34.160 06:49:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:34.160 06:49:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2926249 00:06:34.160 06:49:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:34.420 06:49:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:34.420 06:49:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2926249 00:06:34.420 06:49:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:34.990 06:49:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:34.990 06:49:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2926249 00:06:34.990 06:49:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:35.251 Initializing NVMe Controllers 00:06:35.251 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:35.251 Controller IO queue size 128, less than required. 00:06:35.251 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:35.251 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:35.251 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:35.251 Initialization complete. Launching workers. 
00:06:35.251 ======================================================== 00:06:35.251 Latency(us) 00:06:35.251 Device Information : IOPS MiB/s Average min max 00:06:35.251 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002097.82 1000231.58 1005979.71 00:06:35.251 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002898.26 1000273.91 1008318.20 00:06:35.251 ======================================================== 00:06:35.251 Total : 256.00 0.12 1002498.04 1000231.58 1008318.20 00:06:35.251 00:06:35.511 06:49:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:35.511 06:49:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2926249 00:06:35.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2926249) - No such process 00:06:35.511 06:49:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2926249 00:06:35.511 06:49:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:35.511 06:49:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:35.511 06:49:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:06:35.511 06:49:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:35.511 06:49:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:35.511 06:49:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:35.511 06:49:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:35.511 06:49:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:35.511 rmmod nvme_tcp 00:06:35.511 rmmod nvme_fabrics 00:06:35.511 rmmod nvme_keyring 00:06:35.511 06:49:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:35.511 06:49:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:35.511 06:49:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:35.511 06:49:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 2925384 ']' 00:06:35.511 06:49:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 2925384 00:06:35.511 06:49:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 2925384 ']' 00:06:35.511 06:49:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 2925384 00:06:35.511 06:49:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:06:35.511 06:49:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:35.511 06:49:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2925384 00:06:35.772 06:49:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:35.772 06:49:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:06:35.772 06:49:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2925384' 00:06:35.772 killing process with pid 2925384 00:06:35.772 06:49:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 2925384 00:06:35.772 06:49:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 2925384 00:06:35.772 06:49:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:06:35.772 06:49:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:06:35.772 06:49:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:06:35.773 06:49:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:35.773 06:49:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:06:35.773 06:49:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:06:35.773 06:49:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:06:35.773 06:49:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:35.773 06:49:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:35.773 06:49:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:35.773 06:49:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:35.773 06:49:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:38.328 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:38.328 00:06:38.328 real 0m18.352s 00:06:38.328 user 0m30.807s 00:06:38.328 sys 0m6.768s 00:06:38.328 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:38.328 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:38.328 ************************************ 00:06:38.328 END TEST nvmf_delete_subsystem 00:06:38.328 ************************************ 00:06:38.328 06:49:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:38.328 06:49:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:38.328 06:49:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.328 06:49:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:38.328 ************************************ 00:06:38.328 START TEST nvmf_host_management 00:06:38.328 ************************************ 00:06:38.328 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:38.328 * Looking for test storage... 
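Stepping back to the nvmf_delete_subsystem teardown just traced: nvmftestfini reduces to roughly this sketch. Module names, pid, and the iptables filter are taken from the log; the netns deletion inside _remove_spdk_ns is an assumption, not shown in the trace:

    sync
    modprobe -v -r nvme-tcp        # also pulls out nvme_fabrics/nvme_keyring, per the rmmod lines above
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"                    # nvmf_tgt, pid 2925384 above
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop only the SPDK-tagged rules
    ip netns delete cvl_0_0_ns_spdk                       # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1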
00:06:38.328 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:38.328 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:38.328 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:06:38.328 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:38.328 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:38.328 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:38.328 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:38.328 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:38.328 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.328 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:38.328 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:38.328 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:38.328 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:38.328 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:38.328 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:38.328 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:38.328 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:38.328 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:38.328 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:38.328 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:38.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.329 --rc genhtml_branch_coverage=1 00:06:38.329 --rc genhtml_function_coverage=1 00:06:38.329 --rc genhtml_legend=1 00:06:38.329 --rc geninfo_all_blocks=1 00:06:38.329 --rc geninfo_unexecuted_blocks=1 00:06:38.329 00:06:38.329 ' 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:38.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.329 --rc genhtml_branch_coverage=1 00:06:38.329 --rc genhtml_function_coverage=1 00:06:38.329 --rc genhtml_legend=1 00:06:38.329 --rc geninfo_all_blocks=1 00:06:38.329 --rc geninfo_unexecuted_blocks=1 00:06:38.329 00:06:38.329 ' 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:38.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.329 --rc genhtml_branch_coverage=1 00:06:38.329 --rc genhtml_function_coverage=1 00:06:38.329 --rc genhtml_legend=1 00:06:38.329 --rc geninfo_all_blocks=1 00:06:38.329 --rc geninfo_unexecuted_blocks=1 00:06:38.329 00:06:38.329 ' 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:38.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.329 --rc genhtml_branch_coverage=1 00:06:38.329 --rc genhtml_function_coverage=1 00:06:38.329 --rc genhtml_legend=1 00:06:38.329 --rc geninfo_all_blocks=1 00:06:38.329 --rc geninfo_unexecuted_blocks=1 00:06:38.329 00:06:38.329 ' 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[the same golangci/protoc/go triple repeated by each earlier sourcing of paths/export.sh]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[same value with the toolchain triple prepended once more] 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[likewise] 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo [the exported PATH value] 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33
-- # '[' '' -eq 1 ']' 00:06:38.329 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:38.329 06:49:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:46.618 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:46.618 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:46.618 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:46.618 06:49:44 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:46.618 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:46.618 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:46.618 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:46.618 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.678 ms 00:06:46.618 00:06:46.618 --- 10.0.0.2 ping statistics --- 00:06:46.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:46.619 rtt min/avg/max/mdev = 0.678/0.678/0.678/0.000 ms 00:06:46.619 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:46.619 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:46.619 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.309 ms 00:06:46.619 00:06:46.619 --- 10.0.0.1 ping statistics --- 00:06:46.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:46.619 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:06:46.619 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:46.619 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:06:46.619 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:06:46.619 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:46.619 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:06:46.619 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:06:46.619 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:46.619 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:06:46.619 06:49:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:06:46.619 06:49:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:46.619 06:49:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:46.619 06:49:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:46.619 06:49:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:06:46.619 06:49:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:46.619 06:49:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:46.619 06:49:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=2931269 00:06:46.619 06:49:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 2931269 00:06:46.619 06:49:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:46.619 06:49:45 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2931269 ']' 00:06:46.619 06:49:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.619 06:49:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:46.619 06:49:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.619 06:49:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:46.619 06:49:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:46.619 [2024-10-16 06:49:45.110648] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:06:46.619 [2024-10-16 06:49:45.110716] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:46.619 [2024-10-16 06:49:45.202781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:46.619 [2024-10-16 06:49:45.257364] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:46.619 [2024-10-16 06:49:45.257416] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:46.619 [2024-10-16 06:49:45.257425] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:46.619 [2024-10-16 06:49:45.257432] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:46.619 [2024-10-16 06:49:45.257438] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
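The prepare_net_devs/nvmf_tcp_init sequence traced above (nvmf/common.sh@250-@291) is what gives this test its two endpoints: one port of the E810 pair (cvl_0_0) is moved into a private network namespace for the target, the other (cvl_0_1) stays in the root namespace for the initiator, and an iptables rule opens the NVMe/TCP listener port. A condensed sketch of that wiring, using only commands that appear in the trace (the set -e and the comments are the only additions):

#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init wiring from the trace above.
set -e
ip netns add cvl_0_0_ns_spdk                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator IP, root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# admit NVMe/TCP traffic to the listener port on the initiator-facing side
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns

The two pings mirror the connectivity check in the log; once both answer, nvmf_tgt itself is launched inside the namespace via the ip netns exec cvl_0_0_ns_spdk prefix visible at nvmf/common.sh@506.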
00:06:46.619 [2024-10-16 06:49:45.259623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:46.619 [2024-10-16 06:49:45.259783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:46.619 [2024-10-16 06:49:45.259959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:46.619 [2024-10-16 06:49:45.259960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.619 06:49:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:46.619 06:49:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:06:46.619 06:49:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:06:46.619 06:49:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:46.619 06:49:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:46.619 06:49:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:46.619 06:49:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:46.619 06:49:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.619 06:49:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:46.619 [2024-10-16 06:49:45.983604] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:46.619 06:49:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.619 06:49:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:46.619 06:49:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:46.619 06:49:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:46.619 06:49:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:46.619 06:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:46.619 06:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:46.619 06:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.619 06:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:46.619 Malloc0 00:06:46.619 [2024-10-16 06:49:46.061538] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:46.619 06:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.619 06:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:46.619 06:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:46.619 06:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:46.881 06:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=2931636 00:06:46.881 06:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2931636 /var/tmp/bdevperf.sock 00:06:46.881 06:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2931636 ']' 00:06:46.881 06:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:46.881 06:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:46.881 06:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:46.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:46.881 06:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:46.881 06:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:46.881 06:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:46.881 06:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:46.881 06:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:06:46.881 06:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:06:46.881 06:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:06:46.881 06:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:06:46.881 { 00:06:46.881 "params": { 00:06:46.881 "name": "Nvme$subsystem", 00:06:46.881 "trtype": "$TEST_TRANSPORT", 00:06:46.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:46.881 "adrfam": "ipv4", 00:06:46.881 "trsvcid": "$NVMF_PORT", 00:06:46.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:46.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:46.881 "hdgst": ${hdgst:-false}, 00:06:46.881 "ddgst": ${ddgst:-false} 00:06:46.881 }, 00:06:46.881 "method": "bdev_nvme_attach_controller" 00:06:46.881 } 00:06:46.881 EOF 00:06:46.881 )") 00:06:46.881 06:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:06:46.881 06:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:06:46.881 06:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:06:46.881 06:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:06:46.881 "params": { 00:06:46.881 "name": "Nvme0", 00:06:46.881 "trtype": "tcp", 00:06:46.881 "traddr": "10.0.0.2", 00:06:46.881 "adrfam": "ipv4", 00:06:46.881 "trsvcid": "4420", 00:06:46.881 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:46.881 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:46.881 "hdgst": false, 00:06:46.881 "ddgst": false 00:06:46.881 }, 00:06:46.881 "method": "bdev_nvme_attach_controller" 00:06:46.881 }' 00:06:46.881 [2024-10-16 06:49:46.168818] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
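bdevperf above gets its bdev configuration without a temp file: gen_nvmf_target_json assembles a bdev_nvme_attach_controller entry from the heredoc just traced, and the --json /dev/fd/63 argument is the read end of a bash process substitution. A hedged sketch of the same invocation; the parameters are the ones printed in the trace, but the outer "subsystems" wrapper is the usual SPDK JSON-config shape and is an assumption here, since the trace only shows the attach-controller params:

#!/usr/bin/env bash
# Sketch of the --json /dev/fd/63 technique used above; <(...) expands to
# a /dev/fd path that bdevperf reads like a regular config file.
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # repo root of this run
./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    -q 64 -o 65536 -w verify -t 10 \
    --json <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
)

Keeping the config on an anonymous fd means there is nothing to clean up when the test's exit trap fires.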
00:06:46.881 [2024-10-16 06:49:46.168895] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2931636 ] 00:06:46.881 [2024-10-16 06:49:46.252057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.881 [2024-10-16 06:49:46.305478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.143 Running I/O for 10 seconds... 00:06:47.719 06:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:47.720 06:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:06:47.720 06:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:47.720 06:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.720 06:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:47.720 06:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.720 06:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:47.720 06:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:47.720 06:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:47.720 06:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:47.720 06:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:47.720 06:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:47.720 06:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:47.720 06:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:47.720 06:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:47.720 06:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:47.720 06:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.720 06:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:47.720 06:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.720 06:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=643 00:06:47.720 06:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 643 -ge 100 ']' 00:06:47.720 06:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:47.720 06:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:47.720 06:49:47 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:47.720 06:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:47.720 06:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.720 06:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:47.720 [2024-10-16 06:49:47.081294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7d1f0 is same with the state(6) to be set 00:06:47.720 [2024-10-16 06:49:47.081389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7d1f0 is same with the state(6) to be set 00:06:47.720 [2024-10-16 06:49:47.081398] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7d1f0 is same with the state(6) to be set 00:06:47.720 [2024-10-16 06:49:47.081405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7d1f0 is same with the state(6) to be set 00:06:47.720 [2024-10-16 06:49:47.081413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7d1f0 is same with the state(6) to be set 00:06:47.720 [2024-10-16 06:49:47.081420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7d1f0 is same with the state(6) to be set 00:06:47.720 [2024-10-16 06:49:47.081426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7d1f0 is same with the state(6) to be set 00:06:47.720 [2024-10-16 06:49:47.081433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7d1f0 is same with the state(6) to be set 00:06:47.720 [2024-10-16 06:49:47.081441] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7d1f0 is same with the state(6) to be set 00:06:47.720 [2024-10-16 06:49:47.081447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7d1f0 is same with the state(6) to be set 00:06:47.720 [2024-10-16 06:49:47.081454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7d1f0 is same with the state(6) to be set 00:06:47.720 [2024-10-16 06:49:47.081461] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7d1f0 is same with the state(6) to be set 00:06:47.720 [2024-10-16 06:49:47.081467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7d1f0 is same with the state(6) to be set 00:06:47.720 [2024-10-16 06:49:47.081474] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7d1f0 is same with the state(6) to be set 00:06:47.720 [2024-10-16 06:49:47.081481] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7d1f0 is same with the state(6) to be set 00:06:47.720 [2024-10-16 06:49:47.081487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7d1f0 is same with the state(6) to be set 00:06:47.720 [2024-10-16 06:49:47.081494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7d1f0 is same with the state(6) to be set 00:06:47.720 [2024-10-16 06:49:47.081501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7d1f0 is same with the state(6) to be set 00:06:47.720 [2024-10-16 06:49:47.081508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xc7d1f0 is same with the state(6) to be set
[... identical tcp.c:1773 recv-state messages for tqpair=0xc7d1f0, logged between 06:49:47.081515 and 06:49:47.081802, elided ...]
00:06:47.721 [2024-10-16 06:49:47.081808] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7d1f0 is same with the state(6) to be set
00:06:47.721 [2024-10-16 06:49:47.082042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:47.721 [2024-10-16 06:49:47.082093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 62 further queued READs (cid:1-62, lba:90240-98048, len:128 each) are printed and completed identically as ABORTED - SQ DELETION (00/08), timestamps 06:49:47.082115-06:49:47.083207, elided ...]
00:06:47.722 [2024-10-16 06:49:47.083217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:47.722 [2024-10-16 06:49:47.083224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:47.722 [2024-10-16 06:49:47.083233] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa06370 is same with the state(6) to be set
00:06:47.722 [2024-10-16 06:49:47.083289] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa06370 was disconnected and freed. reset controller.
00:06:47.722 [2024-10-16 06:49:47.084537] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:06:47.722 task offset: 90112 on job bdev=Nvme0n1 fails
00:06:47.722
00:06:47.722 Latency(us)
00:06:47.722 [2024-10-16T04:49:47.221Z] Device Information          : runtime(s)     IOPS     MiB/s    Fail/s     TO/s    Average        min        max
00:06:47.722 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:06:47.722 Job: Nvme0n1 ended in about 0.49 seconds with error
00:06:47.722 Verification LBA range: start 0x0 length 0x400
00:06:47.722 Nvme0n1                     :       0.49   1449.87     90.62    131.81     0.00   39346.97    5461.33   36481.71
00:06:47.722 [2024-10-16T04:49:47.221Z] ===================================================================================================================
00:06:47.722 [2024-10-16T04:49:47.221Z] Total                       :              1449.87     90.62    131.81     0.00   39346.97    5461.33   36481.71
00:06:47.722 06:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:47.722 [2024-10-16 06:49:47.086655] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:06:47.722 [2024-10-16 06:49:47.086689] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ed0c0 (9): Bad file descriptor
00:06:47.722 06:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:06:47.722 06:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:47.722 06:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:47.722 06:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:47.722 06:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
[2024-10-16 06:49:47.107677] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
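A note on rpc_cmd, which recurs throughout these traces: it is the test framework's wrapper for issuing JSON-RPCs to the running target, and the same step can be reproduced directly with scripts/rpc.py (which talks to /var/tmp/spdk.sock by default). A minimal standalone sketch of the host re-authorization performed at target/host_management.sh@85 above; the workspace path is this CI tree's:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Re-add the host NQN to the subsystem after the target comes back up,
# the same call host_management.sh issues through rpc_cmd in the trace above.
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_host \
    nqn.2016-06.io.spdk:cnode0 \
    nqn.2016-06.io.spdk:host0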
00:06:48.663 06:49:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2931636 00:06:48.663 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2931636) - No such process 00:06:48.663 06:49:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:48.663 06:49:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:48.663 06:49:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:48.663 06:49:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:48.663 06:49:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:06:48.663 06:49:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:06:48.663 06:49:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:06:48.663 06:49:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:06:48.663 { 00:06:48.663 "params": { 00:06:48.663 "name": "Nvme$subsystem", 00:06:48.663 "trtype": "$TEST_TRANSPORT", 00:06:48.663 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:48.663 "adrfam": "ipv4", 00:06:48.663 "trsvcid": "$NVMF_PORT", 00:06:48.663 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:48.663 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:48.663 "hdgst": ${hdgst:-false}, 00:06:48.663 "ddgst": ${ddgst:-false} 00:06:48.663 }, 00:06:48.663 "method": "bdev_nvme_attach_controller" 00:06:48.663 } 00:06:48.663 EOF 00:06:48.663 )") 00:06:48.663 06:49:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:06:48.663 06:49:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:06:48.663 06:49:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:06:48.663 06:49:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:06:48.663 "params": { 00:06:48.663 "name": "Nvme0", 00:06:48.663 "trtype": "tcp", 00:06:48.663 "traddr": "10.0.0.2", 00:06:48.663 "adrfam": "ipv4", 00:06:48.663 "trsvcid": "4420", 00:06:48.663 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:48.663 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:48.663 "hdgst": false, 00:06:48.663 "ddgst": false 00:06:48.663 }, 00:06:48.663 "method": "bdev_nvme_attach_controller" 00:06:48.663 }' 00:06:48.663 [2024-10-16 06:49:48.159469] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:06:48.663 [2024-10-16 06:49:48.159525] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2931996 ] 00:06:48.924 [2024-10-16 06:49:48.237621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.924 [2024-10-16 06:49:48.272825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.185 Running I/O for 1 seconds... 
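The nvmf/common.sh@558-584 entries just traced are gen_nvmf_target_json at work: one heredoc-built JSON fragment per subsystem, accumulated in an array, joined on commas, and pretty-printed through jq into the bdevperf --json config. A minimal single-subsystem sketch of that pattern, with the values this run used (requires jq; the real helper also iterates "${@:-1}" and wraps the fragments in the full config document):

#!/usr/bin/env bash
# Sketch of the gen_nvmf_target_json pattern: one JSON fragment per subsystem.
config=()
for subsystem in 0; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done
IFS=,
printf '%s\n' "${config[*]}" | jq .   # with one fragment this is already valid JSON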
00:06:50.128 1619.00 IOPS, 101.19 MiB/s
00:06:50.128 Latency(us)
00:06:50.128 [2024-10-16T04:49:49.627Z] Device Information          : runtime(s)     IOPS     MiB/s    Fail/s     TO/s    Average        min        max
00:06:50.128 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:06:50.128 Verification LBA range: start 0x0 length 0x400
00:06:50.128 Nvme0n1                     :       1.01   1664.59    104.04      0.00     0.00   37704.75    3126.61   32331.09
00:06:50.128 [2024-10-16T04:49:49.627Z] ===================================================================================================================
00:06:50.128 [2024-10-16T04:49:49.627Z] Total                       :              1664.59    104.04      0.00     0.00   37704.75    3126.61   32331.09
00:06:50.129 06:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 06:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 06:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 06:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 06:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 06:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 06:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 06:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 06:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 06:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 06:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:50.129 rmmod nvme_tcp 00:06:50.390 rmmod nvme_fabrics 00:06:50.390 rmmod nvme_keyring 00:06:50.390 06:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 06:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 06:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 06:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 2931269 ']' 06:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 2931269 06:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 2931269 ']' 06:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 2931269 06:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 06:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 06:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2931269 06:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 06:49:49
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:50.390 06:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2931269' 00:06:50.390 killing process with pid 2931269 00:06:50.390 06:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 2931269 00:06:50.390 06:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 2931269 00:06:50.390 [2024-10-16 06:49:49.837408] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:50.390 06:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:06:50.390 06:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:06:50.390 06:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:06:50.390 06:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:50.390 06:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:06:50.390 06:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:06:50.390 06:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:06:50.390 06:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:50.390 06:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:50.390 06:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:50.390 06:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:50.390 06:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:52.942 06:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:52.942 06:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:52.942 00:06:52.942 real 0m14.620s 00:06:52.942 user 0m23.019s 00:06:52.942 sys 0m6.804s 00:06:52.942 06:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:52.942 06:49:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:52.942 ************************************ 00:06:52.942 END TEST nvmf_host_management 00:06:52.942 ************************************ 00:06:52.942 06:49:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:52.942 06:49:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:52.942 06:49:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:52.942 06:49:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:52.942 ************************************ 00:06:52.942 START TEST nvmf_lvol 00:06:52.942 ************************************ 00:06:52.942 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:52.942 * Looking for test storage... 00:06:52.942 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:52.942 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:52.942 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:06:52.942 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:52.942 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:52.942 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:52.942 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:52.942 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:52.942 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:52.942 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:52.942 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:52.942 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:52.942 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:52.942 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:52.942 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:52.942 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:52.942 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:52.942 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:52.942 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:52.942 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:52.942 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:52.942 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:52.942 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:52.942 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:52.942 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:52.942 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:52.942 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:52.942 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:52.942 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:52.942 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:52.942 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:52.942 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:52.942 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:52.942 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:52.942 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:52.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.942 --rc genhtml_branch_coverage=1 00:06:52.942 --rc genhtml_function_coverage=1 00:06:52.942 --rc genhtml_legend=1 00:06:52.942 --rc geninfo_all_blocks=1 00:06:52.942 --rc geninfo_unexecuted_blocks=1 00:06:52.942 00:06:52.942 ' 00:06:52.942 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:52.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.942 --rc genhtml_branch_coverage=1 00:06:52.942 --rc genhtml_function_coverage=1 00:06:52.942 --rc genhtml_legend=1 00:06:52.942 --rc geninfo_all_blocks=1 00:06:52.942 --rc geninfo_unexecuted_blocks=1 00:06:52.942 00:06:52.942 ' 00:06:52.942 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:52.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.942 --rc genhtml_branch_coverage=1 00:06:52.942 --rc genhtml_function_coverage=1 00:06:52.942 --rc genhtml_legend=1 00:06:52.942 --rc geninfo_all_blocks=1 00:06:52.942 --rc geninfo_unexecuted_blocks=1 00:06:52.942 00:06:52.942 ' 00:06:52.942 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:52.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.942 --rc genhtml_branch_coverage=1 00:06:52.942 --rc genhtml_function_coverage=1 00:06:52.942 --rc genhtml_legend=1 00:06:52.942 --rc geninfo_all_blocks=1 00:06:52.942 --rc geninfo_unexecuted_blocks=1 00:06:52.942 00:06:52.942 ' 00:06:52.942 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:52.942 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:52.942 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
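The scripts/common.sh entries above are the lcov version gate that runs before every test: `lt 1.15 2` splits both version strings on '.', '-' and ':' and compares them numerically, component by component, treating missing components as 0. A stripped-down sketch of that helper (the real cmp_versions core also implements '>', '==' and '!=' the same way):

# Return 0 if $1 is a strictly lower version than $2.
lt() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < n; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # equal versions are not less-than
}
lt 1.15 2 && echo 'lcov 1.15 predates 2.x: use the pre-2.0 option set'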
00:06:52.942 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:52.942 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:52.942 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:52.942 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:52.942 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:52.942 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:52.942 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:52.942 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:52.942 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:52.942 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:52.942 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:52.942 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:52.942 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:52.942 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:52.942 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:52.942 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:52.942 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:52.942 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:52.942 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:52.942 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:52.943 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.943 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.943 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.943 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:52.943 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.943 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:52.943 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:52.943 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:52.943 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:52.943 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:52.943 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:52.943 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:52.943 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:52.943 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:52.943 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:52.943 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:52.943 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:52.943 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:52.943 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:06:52.943 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:52.943 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:52.943 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:52.943 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:06:52.943 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:52.943 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:06:52.943 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:06:52.943 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:06:52.943 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:52.943 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:52.943 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:52.943 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:06:52.943 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:06:52.943 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:52.943 06:49:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:01.085 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:01.085 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:01.085 06:49:59 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:01.085 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:01.085 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:01.085 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:01.086 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:01.086 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:01.086 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:07:01.086 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:01.086 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:01.086 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:01.086 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:01.086 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:01.086 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:01.086 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:01.086 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:01.086 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:01.086 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:01.086 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:01.086 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:01.086 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:01.086 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:01.086 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:01.086 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:01.086 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:01.086 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:01.086 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:01.086 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.600 ms 00:07:01.086 00:07:01.086 --- 10.0.0.2 ping statistics --- 00:07:01.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:01.086 rtt min/avg/max/mdev = 0.600/0.600/0.600/0.000 ms 00:07:01.086 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:01.086 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:01.086 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:07:01.086 00:07:01.086 --- 10.0.0.1 ping statistics --- 00:07:01.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:01.086 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:07:01.086 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:01.086 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:07:01.086 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:01.086 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:01.086 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:01.086 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:01.086 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:01.086 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:01.086 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:01.086 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:01.086 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:01.086 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:01.086 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:01.086 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=2936678 00:07:01.086 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 2936678 00:07:01.086 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:01.086 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 2936678 ']' 00:07:01.086 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.086 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:01.086 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.086 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:01.086 06:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:01.086 [2024-10-16 06:49:59.835995] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
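The nvmf_tcp_init sequence just traced is the crux of the phy-mode setup: one port of the dual-port e810 pair is moved into a private network namespace to act as the target side, so host and target traffic really traverse the physical link, then both directions are ping-verified. A condensed, run-as-root sketch of those steps, using the cvl_0_0/cvl_0_1 names this rig happened to enumerate:

NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1   # start both ports clean
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                        # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, default namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP port on the initiator side; the trace tags the rule with a
# comment so the iptr helper can strip it again at teardown.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
ping -c 1 10.0.0.2                                     # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                 # target -> initiator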
00:07:01.086 [2024-10-16 06:49:59.836064] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:01.086 [2024-10-16 06:49:59.924036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:01.086 [2024-10-16 06:49:59.976127] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:01.086 [2024-10-16 06:49:59.976196] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:01.086 [2024-10-16 06:49:59.976205] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:01.086 [2024-10-16 06:49:59.976213] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:01.086 [2024-10-16 06:49:59.976219] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:01.086 [2024-10-16 06:49:59.978062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.086 [2024-10-16 06:49:59.978223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.086 [2024-10-16 06:49:59.978224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:01.347 06:50:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:01.347 06:50:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:07:01.347 06:50:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:01.347 06:50:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:01.347 06:50:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:01.347 06:50:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:01.347 06:50:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:01.608 [2024-10-16 06:50:00.874193] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:01.608 06:50:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:01.869 06:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:01.869 06:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:01.869 06:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:01.869 06:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:02.131 06:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:02.392 06:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=4850d325-e7ff-4867-87c4-b5e08103c741 00:07:02.392 06:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4850d325-e7ff-4867-87c4-b5e08103c741 lvol 20 00:07:02.653 06:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=b43a7ed3-f478-4dd6-94dc-944ef852d72f 00:07:02.653 06:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:02.913 06:50:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b43a7ed3-f478-4dd6-94dc-944ef852d72f 00:07:02.913 06:50:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:03.174 [2024-10-16 06:50:02.491976] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:03.174 06:50:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:03.435 06:50:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2937144 00:07:03.435 06:50:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:03.435 06:50:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:04.374 06:50:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot b43a7ed3-f478-4dd6-94dc-944ef852d72f MY_SNAPSHOT 00:07:04.635 06:50:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=1d10664a-30d8-496b-86c5-eef49fc0c521 00:07:04.635 06:50:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize b43a7ed3-f478-4dd6-94dc-944ef852d72f 30 00:07:04.895 06:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 1d10664a-30d8-496b-86c5-eef49fc0c521 MY_CLONE 00:07:04.895 06:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=877768c6-1724-43a3-b40e-842859ef69c4 00:07:04.895 06:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 877768c6-1724-43a3-b40e-842859ef69c4 00:07:05.468 06:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2937144 00:07:15.470 Initializing NVMe Controllers 00:07:15.470 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:15.470 Controller IO queue size 128, less than required. 00:07:15.470 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
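Stepping back before the results land: the bdevperf job starting here exercises a volume stack built up by the rpc.py calls traced over the preceding screens. A condensed sketch of that provisioning sequence, capturing command output into variables the way the script does (the UUIDs differ run to run; 20 and 30 come from LVOL_BDEV_INIT_SIZE and LVOL_BDEV_FINAL_SIZE):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192      # options from NVMF_TRANSPORT_OPTS
$RPC bdev_malloc_create 64 512                    # -> Malloc0 (64 MB, 512 B blocks)
$RPC bdev_malloc_create 64 512                    # -> Malloc1
$RPC bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($RPC bdev_lvol_create_lvstore raid0 lvs)    # prints the new lvstore UUID
lvol=$($RPC bdev_lvol_create -u "$lvs" lvol 20)   # initial size (LVOL_BDEV_INIT_SIZE)
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
snap=$($RPC bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)   # freeze the lvol...
$RPC bdev_lvol_resize "$lvol" 30                      # ...then grow it while live
clone=$($RPC bdev_lvol_clone "$snap" MY_CLONE)
$RPC bdev_lvol_inflate "$clone"                       # detach the clone from its snapshot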
00:07:15.470 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:07:15.470 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:07:15.470 Initialization complete. Launching workers.
00:07:15.470 ========================================================
00:07:15.470 Latency(us)
00:07:15.470 Device Information                                                        :       IOPS     MiB/s    Average        min        max
00:07:15.470 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3:   15813.40     61.77    8096.31    1585.56   57033.65
00:07:15.470 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4:   17146.10     66.98    7466.40     441.07   50531.31
00:07:15.470 ========================================================
00:07:15.470 Total                                                                     :   32959.50    128.75    7768.62     441.07   57033.65
00:07:15.470
00:07:15.470 06:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:15.470 06:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b43a7ed3-f478-4dd6-94dc-944ef852d72f 00:07:15.470 06:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4850d325-e7ff-4867-87c4-b5e08103c741 00:07:15.470 06:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:15.470 06:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:15.470 06:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 06:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 06:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 06:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 06:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 06:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 06:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:15.470 rmmod nvme_tcp 00:07:15.470 rmmod nvme_fabrics 00:07:15.470 rmmod nvme_keyring 00:07:15.470 06:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 06:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 06:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 06:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 2936678 ']' 06:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 2936678 06:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 2936678 ']' 06:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 2936678 06:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 06:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 06:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2936678 00:07:15.470 06:50:13
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:15.470 06:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:15.470 06:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2936678' 00:07:15.470 killing process with pid 2936678 00:07:15.470 06:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 2936678 00:07:15.471 06:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 2936678 00:07:15.471 06:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:15.471 06:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:15.471 06:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:15.471 06:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:15.471 06:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:07:15.471 06:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:07:15.471 06:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:15.471 06:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:15.471 06:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:15.471 06:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:15.471 06:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:15.471 06:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:16.858 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:16.858 00:07:16.858 real 0m24.044s 00:07:16.858 user 1m5.300s 00:07:16.858 sys 0m8.555s 00:07:16.858 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:16.858 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:16.858 ************************************ 00:07:16.858 END TEST nvmf_lvol 00:07:16.858 ************************************ 00:07:16.858 06:50:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:16.858 06:50:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:16.858 06:50:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:16.858 06:50:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:16.858 ************************************ 00:07:16.858 START TEST nvmf_lvs_grow 00:07:16.858 ************************************ 00:07:16.858 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:16.858 * Looking for test storage... 
00:07:16.858 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:16.858 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:16.858 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:07:16.858 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:16.858 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:16.858 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:16.858 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:16.858 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:16.858 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:16.858 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:16.858 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:16.858 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:16.858 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:16.858 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:16.858 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:16.858 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:16.858 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:16.858 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:16.858 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:16.858 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:16.858 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:16.858 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:16.858 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:16.858 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:16.858 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:16.858 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:16.858 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:16.858 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:16.858 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:16.858 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:16.858 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:16.858 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:16.858 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:16.858 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:16.858 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:16.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.858 --rc genhtml_branch_coverage=1 00:07:16.858 --rc genhtml_function_coverage=1 00:07:16.858 --rc genhtml_legend=1 00:07:16.858 --rc geninfo_all_blocks=1 00:07:16.858 --rc geninfo_unexecuted_blocks=1 00:07:16.858 00:07:16.858 ' 00:07:16.858 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:16.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.858 --rc genhtml_branch_coverage=1 00:07:16.858 --rc genhtml_function_coverage=1 00:07:16.858 --rc genhtml_legend=1 00:07:16.858 --rc geninfo_all_blocks=1 00:07:16.858 --rc geninfo_unexecuted_blocks=1 00:07:16.858 00:07:16.858 ' 00:07:16.858 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:16.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.858 --rc genhtml_branch_coverage=1 00:07:16.858 --rc genhtml_function_coverage=1 00:07:16.858 --rc genhtml_legend=1 00:07:16.858 --rc geninfo_all_blocks=1 00:07:16.858 --rc geninfo_unexecuted_blocks=1 00:07:16.858 00:07:16.858 ' 00:07:16.858 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:16.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.858 --rc genhtml_branch_coverage=1 00:07:16.858 --rc genhtml_function_coverage=1 00:07:16.858 --rc genhtml_legend=1 00:07:16.858 --rc geninfo_all_blocks=1 00:07:16.858 --rc geninfo_unexecuted_blocks=1 00:07:16.858 00:07:16.858 ' 00:07:16.858 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:17.121 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:17.121 06:50:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:17.121 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:17.121 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:17.121 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:17.121 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:17.121 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:17.121 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:17.121 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:17.121 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:17.121 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:17.121 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:17.121 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:17.121 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:17.121 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:17.121 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:17.121 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:17.121 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:17.121 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:17.121 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:17.121 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:17.121 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:17.121 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.121 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.121 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.121 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:17.121 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.121 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:17.121 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:17.121 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:17.121 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:17.121 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:17.121 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:17.121 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:17.121 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:17.121 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:17.121 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:17.121 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:17.121 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:17.121 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:17.121 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:17.121 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:17.121 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:17.121 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:17.121 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:17.121 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:17.121 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:17.121 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:17.122 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:17.122 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:17.122 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:17.122 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:17.122 06:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:25.264 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:25.264 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:25.264 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:25.264 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:25.264 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:25.264 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:25.264 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:25.264 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:25.264 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:25.264 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:25.264 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:25.264 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:25.264 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:25.264 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:25.264 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:07:25.264 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:25.264 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:25.264 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:25.264 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:25.264 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:25.264 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:25.264 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:25.264 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:25.264 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:25.264 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:25.264 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:25.264 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:25.264 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:25.264 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:25.264 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:25.264 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:25.264 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:25.264 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:25.264 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:25.264 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:25.264 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:25.264 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:25.264 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:25.264 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:25.264 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:25.264 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:25.264 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:25.264 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:25.264 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:25.264 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:25.264 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:25.264 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:25.264 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:25.264 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:25.264 06:50:23 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:25.264 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:25.264 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:25.264 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:25.264 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:25.264 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:25.264 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:25.264 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:25.265 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:25.265 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:25.265 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:25.265 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.657 ms 00:07:25.265 00:07:25.265 --- 10.0.0.2 ping statistics --- 00:07:25.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:25.265 rtt min/avg/max/mdev = 0.657/0.657/0.657/0.000 ms 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:25.265 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
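The nvmf_tcp_init sequence above builds the two-port topology used for NET_TYPE=phy runs: one e810 port stays in the default namespace as the initiator (10.0.0.1 on cvl_0_1), the other is moved into the cvl_0_0_ns_spdk namespace as the target (10.0.0.2 on cvl_0_0) so initiator and target traffic crosses the physical link rather than loopback, and an iptables rule opens TCP 4420 on the initiator side. Condensed from the trace (interface names are the run-specific cvl_* devices):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port into its own namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
ping -c 1 10.0.0.2                                             # both directions verified above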
00:07:25.265 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms 00:07:25.265 00:07:25.265 --- 10.0.0.1 ping statistics --- 00:07:25.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:25.265 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=2943751 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 2943751 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 2943751 ']' 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:25.265 06:50:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:25.265 [2024-10-16 06:50:23.977547] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
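nvmfappstart then launches the target inside that namespace so its listener can bind 10.0.0.2: nvmf_tgt gets shared-memory id 0 (-i 0), the full tracepoint mask (-e 0xFFFF), and a single-core reactor mask (-m 0x1), and waitforlisten blocks until the app answers on /var/tmp/spdk.sock. A rough equivalent of what the helper does (the polling loop is paraphrased, not the literal autotest_common.sh implementation; the PID is from this run):

ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!                                           # 2943751 in this run
# poll the RPC socket until the target is up, e.g.:
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done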
00:07:25.265 [2024-10-16 06:50:23.977610] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:25.265 [2024-10-16 06:50:24.066332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.265 [2024-10-16 06:50:24.116948] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:25.265 [2024-10-16 06:50:24.117003] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:25.265 [2024-10-16 06:50:24.117011] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:25.265 [2024-10-16 06:50:24.117019] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:25.265 [2024-10-16 06:50:24.117025] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:25.265 [2024-10-16 06:50:24.117783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.527 06:50:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:25.527 06:50:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:07:25.527 06:50:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:25.527 06:50:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:25.527 06:50:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:25.527 06:50:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:25.527 06:50:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:25.527 [2024-10-16 06:50:25.016179] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:25.788 06:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:25.788 06:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:25.788 06:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:25.788 06:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:25.788 ************************************ 00:07:25.788 START TEST lvs_grow_clean 00:07:25.788 ************************************ 00:07:25.788 06:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:07:25.788 06:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:25.788 06:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:25.788 06:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:25.788 06:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:25.788 06:50:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:25.788 06:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:25.788 06:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:25.788 06:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:25.788 06:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:26.049 06:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:26.049 06:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:26.049 06:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=fd9686d6-6045-4278-9265-42233f3c16b0 00:07:26.049 06:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd9686d6-6045-4278-9265-42233f3c16b0 00:07:26.049 06:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:26.310 06:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:26.310 06:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:26.310 06:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fd9686d6-6045-4278-9265-42233f3c16b0 lvol 150 00:07:26.570 06:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=be2509c4-a864-476c-a0de-34550f9961fa 00:07:26.570 06:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:26.570 06:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:26.570 [2024-10-16 06:50:26.048641] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:26.570 [2024-10-16 06:50:26.048716] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:26.570 true 00:07:26.570 06:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
fd9686d6-6045-4278-9265-42233f3c16b0 00:07:26.831 06:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:26.831 06:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:26.831 06:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:27.091 06:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 be2509c4-a864-476c-a0de-34550f9961fa 00:07:27.352 06:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:27.352 [2024-10-16 06:50:26.775019] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:27.352 06:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:27.614 06:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:27.614 06:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2944358 00:07:27.614 06:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:27.614 06:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2944358 /var/tmp/bdevperf.sock 00:07:27.614 06:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 2944358 ']' 00:07:27.614 06:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:27.614 06:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:27.614 06:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:27.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:27.614 06:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:27.614 06:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:27.614 [2024-10-16 06:50:27.015185] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
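The cluster counts this test asserts follow directly from the sizes above: a 200 MiB AIO file carved into 4 MiB clusters (--cluster-sz 4194304) yields 50 clusters, with the remainder after 49 reserved for lvstore metadata, hence total_data_clusters == 49; the 150 MiB lvol rounds up to ceil(150/4) = 38 allocated clusters. After the file is grown to 400 MiB and rescanned (51200 -> 102400 blocks of 4 KiB), bdev_lvol_grow_lvstore later in the run lifts the store to 99 data clusters, leaving 99 - 38 = 61 free. Condensed from the trace, with the run-specific lvstore UUID:

truncate -s 200M test/nvmf/target/aio_bdev
rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
    --md-pages-per-cluster-ratio 300 aio_bdev lvs                          # total_data_clusters == 49
rpc.py bdev_lvol_create -u fd9686d6-6045-4278-9265-42233f3c16b0 lvol 150   # 38 allocated clusters
truncate -s 400M test/nvmf/target/aio_bdev
rpc.py bdev_aio_rescan aio_bdev                                            # 51200 -> 102400 blocks
rpc.py bdev_lvol_grow_lvstore -u fd9686d6-6045-4278-9265-42233f3c16b0      # 99 data / 61 free clusters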
00:07:27.614 [2024-10-16 06:50:27.015255] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2944358 ] 00:07:27.614 [2024-10-16 06:50:27.095591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.884 [2024-10-16 06:50:27.148865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.456 06:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:28.456 06:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:07:28.456 06:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:29.029 Nvme0n1 00:07:29.029 06:50:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:29.029 [ 00:07:29.029 { 00:07:29.029 "name": "Nvme0n1", 00:07:29.029 "aliases": [ 00:07:29.029 "be2509c4-a864-476c-a0de-34550f9961fa" 00:07:29.029 ], 00:07:29.029 "product_name": "NVMe disk", 00:07:29.029 "block_size": 4096, 00:07:29.029 "num_blocks": 38912, 00:07:29.029 "uuid": "be2509c4-a864-476c-a0de-34550f9961fa", 00:07:29.029 "numa_id": 0, 00:07:29.029 "assigned_rate_limits": { 00:07:29.029 "rw_ios_per_sec": 0, 00:07:29.029 "rw_mbytes_per_sec": 0, 00:07:29.029 "r_mbytes_per_sec": 0, 00:07:29.029 "w_mbytes_per_sec": 0 00:07:29.029 }, 00:07:29.029 "claimed": false, 00:07:29.029 "zoned": false, 00:07:29.029 "supported_io_types": { 00:07:29.029 "read": true, 00:07:29.029 "write": true, 00:07:29.029 "unmap": true, 00:07:29.029 "flush": true, 00:07:29.029 "reset": true, 00:07:29.029 "nvme_admin": true, 00:07:29.029 "nvme_io": true, 00:07:29.029 "nvme_io_md": false, 00:07:29.029 "write_zeroes": true, 00:07:29.029 "zcopy": false, 00:07:29.029 "get_zone_info": false, 00:07:29.029 "zone_management": false, 00:07:29.029 "zone_append": false, 00:07:29.029 "compare": true, 00:07:29.029 "compare_and_write": true, 00:07:29.029 "abort": true, 00:07:29.029 "seek_hole": false, 00:07:29.029 "seek_data": false, 00:07:29.029 "copy": true, 00:07:29.029 "nvme_iov_md": false 00:07:29.029 }, 00:07:29.029 "memory_domains": [ 00:07:29.029 { 00:07:29.029 "dma_device_id": "system", 00:07:29.029 "dma_device_type": 1 00:07:29.029 } 00:07:29.029 ], 00:07:29.029 "driver_specific": { 00:07:29.029 "nvme": [ 00:07:29.029 { 00:07:29.029 "trid": { 00:07:29.029 "trtype": "TCP", 00:07:29.029 "adrfam": "IPv4", 00:07:29.029 "traddr": "10.0.0.2", 00:07:29.029 "trsvcid": "4420", 00:07:29.029 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:29.029 }, 00:07:29.029 "ctrlr_data": { 00:07:29.029 "cntlid": 1, 00:07:29.029 "vendor_id": "0x8086", 00:07:29.029 "model_number": "SPDK bdev Controller", 00:07:29.029 "serial_number": "SPDK0", 00:07:29.029 "firmware_revision": "25.01", 00:07:29.029 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:29.029 "oacs": { 00:07:29.029 "security": 0, 00:07:29.029 "format": 0, 00:07:29.029 "firmware": 0, 00:07:29.029 "ns_manage": 0 00:07:29.029 }, 00:07:29.029 "multi_ctrlr": true, 00:07:29.029 
"ana_reporting": false 00:07:29.029 }, 00:07:29.029 "vs": { 00:07:29.029 "nvme_version": "1.3" 00:07:29.029 }, 00:07:29.029 "ns_data": { 00:07:29.029 "id": 1, 00:07:29.029 "can_share": true 00:07:29.029 } 00:07:29.029 } 00:07:29.029 ], 00:07:29.029 "mp_policy": "active_passive" 00:07:29.029 } 00:07:29.029 } 00:07:29.029 ] 00:07:29.029 06:50:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2944579 00:07:29.029 06:50:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:29.029 06:50:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:29.291 Running I/O for 10 seconds... 00:07:30.233 Latency(us) 00:07:30.233 [2024-10-16T04:50:29.732Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:30.233 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:30.233 Nvme0n1 : 1.00 25019.00 97.73 0.00 0.00 0.00 0.00 0.00 00:07:30.233 [2024-10-16T04:50:29.732Z] =================================================================================================================== 00:07:30.233 [2024-10-16T04:50:29.732Z] Total : 25019.00 97.73 0.00 0.00 0.00 0.00 0.00 00:07:30.233 00:07:31.177 06:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u fd9686d6-6045-4278-9265-42233f3c16b0 00:07:31.177 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:31.177 Nvme0n1 : 2.00 25174.00 98.34 0.00 0.00 0.00 0.00 0.00 00:07:31.177 [2024-10-16T04:50:30.676Z] =================================================================================================================== 00:07:31.177 [2024-10-16T04:50:30.676Z] Total : 25174.00 98.34 0.00 0.00 0.00 0.00 0.00 00:07:31.177 00:07:31.177 true 00:07:31.177 06:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd9686d6-6045-4278-9265-42233f3c16b0 00:07:31.177 06:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:31.438 06:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:31.438 06:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:31.438 06:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2944579 00:07:32.380 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:32.380 Nvme0n1 : 3.00 25262.00 98.68 0.00 0.00 0.00 0.00 0.00 00:07:32.380 [2024-10-16T04:50:31.879Z] =================================================================================================================== 00:07:32.380 [2024-10-16T04:50:31.879Z] Total : 25262.00 98.68 0.00 0.00 0.00 0.00 0.00 00:07:32.380 00:07:33.323 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:33.323 Nvme0n1 : 4.00 25330.25 98.95 0.00 0.00 0.00 0.00 0.00 00:07:33.323 [2024-10-16T04:50:32.822Z] 
=================================================================================================================== 00:07:33.323 [2024-10-16T04:50:32.822Z] Total : 25330.25 98.95 0.00 0.00 0.00 0.00 0.00 00:07:33.323 00:07:34.266 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:34.266 Nvme0n1 : 5.00 25371.20 99.11 0.00 0.00 0.00 0.00 0.00 00:07:34.266 [2024-10-16T04:50:33.765Z] =================================================================================================================== 00:07:34.266 [2024-10-16T04:50:33.765Z] Total : 25371.20 99.11 0.00 0.00 0.00 0.00 0.00 00:07:34.266 00:07:35.208 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:35.208 Nvme0n1 : 6.00 25398.83 99.21 0.00 0.00 0.00 0.00 0.00 00:07:35.208 [2024-10-16T04:50:34.707Z] =================================================================================================================== 00:07:35.208 [2024-10-16T04:50:34.707Z] Total : 25398.83 99.21 0.00 0.00 0.00 0.00 0.00 00:07:35.208 00:07:36.149 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:36.149 Nvme0n1 : 7.00 25423.00 99.31 0.00 0.00 0.00 0.00 0.00 00:07:36.149 [2024-10-16T04:50:35.648Z] =================================================================================================================== 00:07:36.149 [2024-10-16T04:50:35.648Z] Total : 25423.00 99.31 0.00 0.00 0.00 0.00 0.00 00:07:36.149 00:07:37.532 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:37.532 Nvme0n1 : 8.00 25440.12 99.38 0.00 0.00 0.00 0.00 0.00 00:07:37.532 [2024-10-16T04:50:37.031Z] =================================================================================================================== 00:07:37.532 [2024-10-16T04:50:37.031Z] Total : 25440.12 99.38 0.00 0.00 0.00 0.00 0.00 00:07:37.532 00:07:38.103 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:38.103 Nvme0n1 : 9.00 25457.67 99.44 0.00 0.00 0.00 0.00 0.00 00:07:38.103 [2024-10-16T04:50:37.602Z] =================================================================================================================== 00:07:38.103 [2024-10-16T04:50:37.602Z] Total : 25457.67 99.44 0.00 0.00 0.00 0.00 0.00 00:07:38.103 00:07:39.486 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:39.486 Nvme0n1 : 10.00 25462.60 99.46 0.00 0.00 0.00 0.00 0.00 00:07:39.486 [2024-10-16T04:50:38.985Z] =================================================================================================================== 00:07:39.486 [2024-10-16T04:50:38.985Z] Total : 25462.60 99.46 0.00 0.00 0.00 0.00 0.00 00:07:39.486 00:07:39.486 00:07:39.486 Latency(us) 00:07:39.486 [2024-10-16T04:50:38.985Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:39.486 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:39.486 Nvme0n1 : 10.00 25461.68 99.46 0.00 0.00 5024.08 2525.87 11250.35 00:07:39.486 [2024-10-16T04:50:38.985Z] =================================================================================================================== 00:07:39.486 [2024-10-16T04:50:38.985Z] Total : 25461.68 99.46 0.00 0.00 5024.08 2525.87 11250.35 00:07:39.486 { 00:07:39.487 "results": [ 00:07:39.487 { 00:07:39.487 "job": "Nvme0n1", 00:07:39.487 "core_mask": "0x2", 00:07:39.487 "workload": "randwrite", 00:07:39.487 "status": "finished", 00:07:39.487 "queue_depth": 128, 00:07:39.487 "io_size": 4096, 00:07:39.487 
"runtime": 10.004052, 00:07:39.487 "iops": 25461.682926078352, 00:07:39.487 "mibps": 99.45969892999356, 00:07:39.487 "io_failed": 0, 00:07:39.487 "io_timeout": 0, 00:07:39.487 "avg_latency_us": 5024.076408710217, 00:07:39.487 "min_latency_us": 2525.866666666667, 00:07:39.487 "max_latency_us": 11250.346666666666 00:07:39.487 } 00:07:39.487 ], 00:07:39.487 "core_count": 1 00:07:39.487 } 00:07:39.487 06:50:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2944358 00:07:39.487 06:50:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 2944358 ']' 00:07:39.487 06:50:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 2944358 00:07:39.487 06:50:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:07:39.487 06:50:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:39.487 06:50:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2944358 00:07:39.487 06:50:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:39.487 06:50:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:39.487 06:50:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2944358' 00:07:39.487 killing process with pid 2944358 00:07:39.487 06:50:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 2944358 00:07:39.487 Received shutdown signal, test time was about 10.000000 seconds 00:07:39.487 00:07:39.487 Latency(us) 00:07:39.487 [2024-10-16T04:50:38.986Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:39.487 [2024-10-16T04:50:38.986Z] =================================================================================================================== 00:07:39.487 [2024-10-16T04:50:38.986Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:39.487 06:50:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 2944358 00:07:39.487 06:50:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:39.487 06:50:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:39.747 06:50:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd9686d6-6045-4278-9265-42233f3c16b0 00:07:39.747 06:50:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:40.007 06:50:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:40.007 06:50:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:40.007 06:50:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:40.007 [2024-10-16 06:50:39.456307] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:40.007 06:50:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd9686d6-6045-4278-9265-42233f3c16b0 00:07:40.007 06:50:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:07:40.007 06:50:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd9686d6-6045-4278-9265-42233f3c16b0 00:07:40.007 06:50:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:40.007 06:50:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.007 06:50:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:40.007 06:50:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.007 06:50:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:40.007 06:50:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.007 06:50:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:40.007 06:50:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:40.007 06:50:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd9686d6-6045-4278-9265-42233f3c16b0 00:07:40.268 request: 00:07:40.268 { 00:07:40.268 "uuid": "fd9686d6-6045-4278-9265-42233f3c16b0", 00:07:40.268 "method": "bdev_lvol_get_lvstores", 00:07:40.268 "req_id": 1 00:07:40.268 } 00:07:40.268 Got JSON-RPC error response 00:07:40.268 response: 00:07:40.268 { 00:07:40.268 "code": -19, 00:07:40.268 "message": "No such device" 00:07:40.268 } 00:07:40.268 06:50:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:07:40.268 06:50:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:40.268 06:50:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:40.268 06:50:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:40.268 06:50:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:40.529 aio_bdev 00:07:40.529 06:50:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev be2509c4-a864-476c-a0de-34550f9961fa 00:07:40.529 06:50:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=be2509c4-a864-476c-a0de-34550f9961fa 00:07:40.529 06:50:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:40.529 06:50:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:07:40.529 06:50:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:40.529 06:50:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:40.529 06:50:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:40.529 06:50:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b be2509c4-a864-476c-a0de-34550f9961fa -t 2000 00:07:40.790 [ 00:07:40.790 { 00:07:40.790 "name": "be2509c4-a864-476c-a0de-34550f9961fa", 00:07:40.790 "aliases": [ 00:07:40.790 "lvs/lvol" 00:07:40.790 ], 00:07:40.790 "product_name": "Logical Volume", 00:07:40.790 "block_size": 4096, 00:07:40.790 "num_blocks": 38912, 00:07:40.790 "uuid": "be2509c4-a864-476c-a0de-34550f9961fa", 00:07:40.790 "assigned_rate_limits": { 00:07:40.790 "rw_ios_per_sec": 0, 00:07:40.790 "rw_mbytes_per_sec": 0, 00:07:40.790 "r_mbytes_per_sec": 0, 00:07:40.790 "w_mbytes_per_sec": 0 00:07:40.790 }, 00:07:40.790 "claimed": false, 00:07:40.790 "zoned": false, 00:07:40.790 "supported_io_types": { 00:07:40.790 "read": true, 00:07:40.790 "write": true, 00:07:40.790 "unmap": true, 00:07:40.790 "flush": false, 00:07:40.790 "reset": true, 00:07:40.790 "nvme_admin": false, 00:07:40.790 "nvme_io": false, 00:07:40.790 "nvme_io_md": false, 00:07:40.790 "write_zeroes": true, 00:07:40.790 "zcopy": false, 00:07:40.791 "get_zone_info": false, 00:07:40.791 "zone_management": false, 00:07:40.791 "zone_append": false, 00:07:40.791 "compare": false, 00:07:40.791 "compare_and_write": false, 00:07:40.791 "abort": false, 00:07:40.791 "seek_hole": true, 00:07:40.791 "seek_data": true, 00:07:40.791 "copy": false, 00:07:40.791 "nvme_iov_md": false 00:07:40.791 }, 00:07:40.791 "driver_specific": { 00:07:40.791 "lvol": { 00:07:40.791 "lvol_store_uuid": "fd9686d6-6045-4278-9265-42233f3c16b0", 00:07:40.791 "base_bdev": "aio_bdev", 00:07:40.791 "thin_provision": false, 00:07:40.791 "num_allocated_clusters": 38, 00:07:40.791 "snapshot": false, 00:07:40.791 "clone": false, 00:07:40.791 "esnap_clone": false 00:07:40.791 } 00:07:40.791 } 00:07:40.791 } 00:07:40.791 ] 00:07:40.791 06:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:07:40.791 06:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd9686d6-6045-4278-9265-42233f3c16b0 00:07:40.791 
06:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:41.051 06:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:41.051 06:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd9686d6-6045-4278-9265-42233f3c16b0 00:07:41.051 06:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:41.051 06:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:41.051 06:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete be2509c4-a864-476c-a0de-34550f9961fa 00:07:41.312 06:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fd9686d6-6045-4278-9265-42233f3c16b0 00:07:41.572 06:50:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:41.572 06:50:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:41.572 00:07:41.572 real 0m15.950s 00:07:41.572 user 0m15.784s 00:07:41.572 sys 0m1.367s 00:07:41.572 06:50:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:41.572 06:50:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:41.572 ************************************ 00:07:41.572 END TEST lvs_grow_clean 00:07:41.572 ************************************ 00:07:41.833 06:50:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:41.833 06:50:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:41.833 06:50:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:41.833 06:50:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:41.833 ************************************ 00:07:41.833 START TEST lvs_grow_dirty 00:07:41.833 ************************************ 00:07:41.833 06:50:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:07:41.833 06:50:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:41.833 06:50:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:41.833 06:50:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:41.833 06:50:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:41.833 06:50:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:41.833 06:50:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:41.833 06:50:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:41.833 06:50:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:41.833 06:50:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:42.094 06:50:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:42.094 06:50:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:42.094 06:50:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=1a65e2c9-a303-408f-ba32-b7bbe5eb540f 00:07:42.094 06:50:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a65e2c9-a303-408f-ba32-b7bbe5eb540f 00:07:42.094 06:50:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:42.354 06:50:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:42.354 06:50:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:42.354 06:50:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1a65e2c9-a303-408f-ba32-b7bbe5eb540f lvol 150 00:07:42.637 06:50:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=ba8f9982-7cc6-4e33-aa14-5d681a85c39e 00:07:42.637 06:50:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:42.637 06:50:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:42.637 [2024-10-16 06:50:42.029199] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:42.637 [2024-10-16 06:50:42.029243] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:42.637 true 00:07:42.638 06:50:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a65e2c9-a303-408f-ba32-b7bbe5eb540f 00:07:42.638 06:50:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:42.933 06:50:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:42.933 06:50:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:42.933 06:50:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ba8f9982-7cc6-4e33-aa14-5d681a85c39e 00:07:43.220 06:50:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:43.220 [2024-10-16 06:50:42.703160] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:43.220 06:50:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:43.480 06:50:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2947567 00:07:43.480 06:50:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:43.480 06:50:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:43.480 06:50:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2947567 /var/tmp/bdevperf.sock 00:07:43.480 06:50:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2947567 ']' 00:07:43.480 06:50:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:43.480 06:50:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:43.480 06:50:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:43.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:43.480 06:50:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:43.480 06:50:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:43.480 [2024-10-16 06:50:42.918075] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
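For reference, the grow sequence this dirty-path test has just set up can be reproduced by hand with the same RPCs that appear in the trace. This is a minimal sketch, not part of the recorded run: it assumes a running SPDK target with scripts/rpc.py on PATH, AIO_FILE is a placeholder for the backing file path, and the lvstore UUID is captured from the create call exactly as the test script does with its lvs variable.

    truncate -s 200M "$AIO_FILE"                        # 200M backing file to start
    rpc.py bdev_aio_create "$AIO_FILE" aio_bdev 4096    # AIO bdev with 4 KiB blocks
    LVS=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
            --md-pages-per-cluster-ratio 300 aio_bdev lvs)   # prints the new lvstore UUID
    rpc.py bdev_lvol_create -u "$LVS" lvol 150          # 150M lvol on the store
    truncate -s 400M "$AIO_FILE"                        # grow the file underneath SPDK
    rpc.py bdev_aio_rescan aio_bdev                     # block count goes 51200 -> 102400
    rpc.py bdev_lvol_grow_lvstore -u "$LVS"             # lvstore claims the new clusters
    rpc.py bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].total_data_clusters'   # 99 after the grow (was 49)

The 49 -> 99 cluster counts match the asserts in the trace: with a 4 MiB cluster size, the 200M file yields 49 data clusters after metadata overhead, and doubling the file to 400M roughly doubles that.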
00:07:43.480 [2024-10-16 06:50:42.918129] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2947567 ] 00:07:43.740 [2024-10-16 06:50:42.994648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.740 [2024-10-16 06:50:43.024466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.311 06:50:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:44.311 06:50:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:07:44.311 06:50:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:44.881 Nvme0n1 00:07:44.881 06:50:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:44.881 [ 00:07:44.881 { 00:07:44.881 "name": "Nvme0n1", 00:07:44.881 "aliases": [ 00:07:44.881 "ba8f9982-7cc6-4e33-aa14-5d681a85c39e" 00:07:44.881 ], 00:07:44.881 "product_name": "NVMe disk", 00:07:44.881 "block_size": 4096, 00:07:44.881 "num_blocks": 38912, 00:07:44.881 "uuid": "ba8f9982-7cc6-4e33-aa14-5d681a85c39e", 00:07:44.881 "numa_id": 0, 00:07:44.881 "assigned_rate_limits": { 00:07:44.881 "rw_ios_per_sec": 0, 00:07:44.881 "rw_mbytes_per_sec": 0, 00:07:44.881 "r_mbytes_per_sec": 0, 00:07:44.881 "w_mbytes_per_sec": 0 00:07:44.881 }, 00:07:44.881 "claimed": false, 00:07:44.881 "zoned": false, 00:07:44.881 "supported_io_types": { 00:07:44.881 "read": true, 00:07:44.881 "write": true, 00:07:44.881 "unmap": true, 00:07:44.881 "flush": true, 00:07:44.881 "reset": true, 00:07:44.881 "nvme_admin": true, 00:07:44.881 "nvme_io": true, 00:07:44.881 "nvme_io_md": false, 00:07:44.881 "write_zeroes": true, 00:07:44.881 "zcopy": false, 00:07:44.881 "get_zone_info": false, 00:07:44.881 "zone_management": false, 00:07:44.881 "zone_append": false, 00:07:44.881 "compare": true, 00:07:44.881 "compare_and_write": true, 00:07:44.881 "abort": true, 00:07:44.881 "seek_hole": false, 00:07:44.881 "seek_data": false, 00:07:44.881 "copy": true, 00:07:44.881 "nvme_iov_md": false 00:07:44.881 }, 00:07:44.881 "memory_domains": [ 00:07:44.881 { 00:07:44.881 "dma_device_id": "system", 00:07:44.881 "dma_device_type": 1 00:07:44.882 } 00:07:44.882 ], 00:07:44.882 "driver_specific": { 00:07:44.882 "nvme": [ 00:07:44.882 { 00:07:44.882 "trid": { 00:07:44.882 "trtype": "TCP", 00:07:44.882 "adrfam": "IPv4", 00:07:44.882 "traddr": "10.0.0.2", 00:07:44.882 "trsvcid": "4420", 00:07:44.882 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:44.882 }, 00:07:44.882 "ctrlr_data": { 00:07:44.882 "cntlid": 1, 00:07:44.882 "vendor_id": "0x8086", 00:07:44.882 "model_number": "SPDK bdev Controller", 00:07:44.882 "serial_number": "SPDK0", 00:07:44.882 "firmware_revision": "25.01", 00:07:44.882 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:44.882 "oacs": { 00:07:44.882 "security": 0, 00:07:44.882 "format": 0, 00:07:44.882 "firmware": 0, 00:07:44.882 "ns_manage": 0 00:07:44.882 }, 00:07:44.882 "multi_ctrlr": true, 00:07:44.882 
"ana_reporting": false 00:07:44.882 }, 00:07:44.882 "vs": { 00:07:44.882 "nvme_version": "1.3" 00:07:44.882 }, 00:07:44.882 "ns_data": { 00:07:44.882 "id": 1, 00:07:44.882 "can_share": true 00:07:44.882 } 00:07:44.882 } 00:07:44.882 ], 00:07:44.882 "mp_policy": "active_passive" 00:07:44.882 } 00:07:44.882 } 00:07:44.882 ] 00:07:44.882 06:50:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2947907 00:07:44.882 06:50:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:44.882 06:50:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:44.882 Running I/O for 10 seconds... 00:07:46.265 Latency(us) 00:07:46.265 [2024-10-16T04:50:45.764Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:46.265 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:46.265 Nvme0n1 : 1.00 25096.00 98.03 0.00 0.00 0.00 0.00 0.00 00:07:46.265 [2024-10-16T04:50:45.764Z] =================================================================================================================== 00:07:46.265 [2024-10-16T04:50:45.764Z] Total : 25096.00 98.03 0.00 0.00 0.00 0.00 0.00 00:07:46.265 00:07:46.837 06:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1a65e2c9-a303-408f-ba32-b7bbe5eb540f 00:07:47.098 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:47.098 Nvme0n1 : 2.00 25250.00 98.63 0.00 0.00 0.00 0.00 0.00 00:07:47.098 [2024-10-16T04:50:46.597Z] =================================================================================================================== 00:07:47.098 [2024-10-16T04:50:46.597Z] Total : 25250.00 98.63 0.00 0.00 0.00 0.00 0.00 00:07:47.098 00:07:47.098 true 00:07:47.098 06:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a65e2c9-a303-408f-ba32-b7bbe5eb540f 00:07:47.098 06:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:47.358 06:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:47.358 06:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:47.358 06:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2947907 00:07:47.938 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:47.938 Nvme0n1 : 3.00 25336.33 98.97 0.00 0.00 0.00 0.00 0.00 00:07:47.938 [2024-10-16T04:50:47.437Z] =================================================================================================================== 00:07:47.938 [2024-10-16T04:50:47.437Z] Total : 25336.33 98.97 0.00 0.00 0.00 0.00 0.00 00:07:47.938 00:07:48.880 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:48.880 Nvme0n1 : 4.00 25385.25 99.16 0.00 0.00 0.00 0.00 0.00 00:07:48.880 [2024-10-16T04:50:48.379Z] 
=================================================================================================================== 00:07:48.880 [2024-10-16T04:50:48.379Z] Total : 25385.25 99.16 0.00 0.00 0.00 0.00 0.00 00:07:48.880 00:07:50.263 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:50.263 Nvme0n1 : 5.00 25409.40 99.26 0.00 0.00 0.00 0.00 0.00 00:07:50.263 [2024-10-16T04:50:49.762Z] =================================================================================================================== 00:07:50.263 [2024-10-16T04:50:49.762Z] Total : 25409.40 99.26 0.00 0.00 0.00 0.00 0.00 00:07:50.263 00:07:51.203 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:51.203 Nvme0n1 : 6.00 25435.50 99.36 0.00 0.00 0.00 0.00 0.00 00:07:51.203 [2024-10-16T04:50:50.702Z] =================================================================================================================== 00:07:51.203 [2024-10-16T04:50:50.702Z] Total : 25435.50 99.36 0.00 0.00 0.00 0.00 0.00 00:07:51.203 00:07:52.144 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:52.144 Nvme0n1 : 7.00 25449.71 99.41 0.00 0.00 0.00 0.00 0.00 00:07:52.144 [2024-10-16T04:50:51.643Z] =================================================================================================================== 00:07:52.144 [2024-10-16T04:50:51.643Z] Total : 25449.71 99.41 0.00 0.00 0.00 0.00 0.00 00:07:52.144 00:07:53.085 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:53.085 Nvme0n1 : 8.00 25468.12 99.48 0.00 0.00 0.00 0.00 0.00 00:07:53.085 [2024-10-16T04:50:52.584Z] =================================================================================================================== 00:07:53.085 [2024-10-16T04:50:52.584Z] Total : 25468.12 99.48 0.00 0.00 0.00 0.00 0.00 00:07:53.085 00:07:54.026 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:54.026 Nvme0n1 : 9.00 25482.78 99.54 0.00 0.00 0.00 0.00 0.00 00:07:54.026 [2024-10-16T04:50:53.525Z] =================================================================================================================== 00:07:54.026 [2024-10-16T04:50:53.525Z] Total : 25482.78 99.54 0.00 0.00 0.00 0.00 0.00 00:07:54.026 00:07:54.967 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:54.967 Nvme0n1 : 10.00 25494.30 99.59 0.00 0.00 0.00 0.00 0.00 00:07:54.967 [2024-10-16T04:50:54.466Z] =================================================================================================================== 00:07:54.967 [2024-10-16T04:50:54.466Z] Total : 25494.30 99.59 0.00 0.00 0.00 0.00 0.00 00:07:54.967 00:07:54.967 00:07:54.967 Latency(us) 00:07:54.967 [2024-10-16T04:50:54.466Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:54.967 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:54.967 Nvme0n1 : 10.00 25491.51 99.58 0.00 0.00 5017.98 3072.00 9939.63 00:07:54.967 [2024-10-16T04:50:54.466Z] =================================================================================================================== 00:07:54.967 [2024-10-16T04:50:54.466Z] Total : 25491.51 99.58 0.00 0.00 5017.98 3072.00 9939.63 00:07:54.967 { 00:07:54.967 "results": [ 00:07:54.967 { 00:07:54.967 "job": "Nvme0n1", 00:07:54.967 "core_mask": "0x2", 00:07:54.967 "workload": "randwrite", 00:07:54.967 "status": "finished", 00:07:54.967 "queue_depth": 128, 00:07:54.967 "io_size": 4096, 00:07:54.967 
"runtime": 10.003645, 00:07:54.967 "iops": 25491.50834520817, 00:07:54.967 "mibps": 99.57620447346942, 00:07:54.967 "io_failed": 0, 00:07:54.967 "io_timeout": 0, 00:07:54.967 "avg_latency_us": 5017.981585309742, 00:07:54.967 "min_latency_us": 3072.0, 00:07:54.967 "max_latency_us": 9939.626666666667 00:07:54.967 } 00:07:54.967 ], 00:07:54.967 "core_count": 1 00:07:54.967 } 00:07:54.967 06:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2947567 00:07:54.967 06:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 2947567 ']' 00:07:54.967 06:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 2947567 00:07:54.967 06:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:07:54.967 06:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:54.967 06:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2947567 00:07:54.967 06:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:54.967 06:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:54.967 06:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2947567' 00:07:54.967 killing process with pid 2947567 00:07:54.967 06:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 2947567 00:07:54.967 Received shutdown signal, test time was about 10.000000 seconds 00:07:54.967 00:07:54.967 Latency(us) 00:07:54.967 [2024-10-16T04:50:54.466Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:54.967 [2024-10-16T04:50:54.466Z] =================================================================================================================== 00:07:54.967 [2024-10-16T04:50:54.466Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:54.967 06:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 2947567 00:07:55.228 06:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:55.228 06:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:55.489 06:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a65e2c9-a303-408f-ba32-b7bbe5eb540f 00:07:55.489 06:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:55.749 06:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:55.749 06:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:55.749 06:50:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2943751 00:07:55.749 06:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2943751 00:07:55.749 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2943751 Killed "${NVMF_APP[@]}" "$@" 00:07:55.749 06:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:55.749 06:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:55.749 06:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:55.749 06:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:55.749 06:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:55.749 06:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=2949936 00:07:55.749 06:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 2949936 00:07:55.749 06:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2949936 ']' 00:07:55.749 06:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.749 06:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:55.749 06:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:55.749 06:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:55.749 06:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:55.749 06:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:55.749 [2024-10-16 06:50:55.133978] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:07:55.749 [2024-10-16 06:50:55.134036] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:55.749 [2024-10-16 06:50:55.216372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.749 [2024-10-16 06:50:55.245796] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:55.749 [2024-10-16 06:50:55.245823] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:55.749 [2024-10-16 06:50:55.245828] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:55.749 [2024-10-16 06:50:55.245833] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
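The kill -9 above is the point of the dirty-path test: the first nvmf target was terminated without closing the lvstore, so the metadata on the AIO backing file is left marked dirty. What follows in the trace is a fresh target re-attaching that file, at which point the blobstore performs recovery and the test asserts that the lvol and the cluster counts survived. A rough by-hand equivalent, as a sketch only (assumes the freshly started target, the same placeholder AIO_FILE, and the lvstore/lvol UUIDs from the earlier create calls):

    rpc.py bdev_aio_create "$AIO_FILE" aio_bdev 4096    # re-attach; blobstore replays dirty metadata
    rpc.py bdev_wait_for_examine                        # wait for lvol examine/claim to finish
    rpc.py bdev_get_bdevs -b "$LVOL_UUID" -t 2000       # the lvol should reappear intact
    rpc.py bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].free_clusters'         # expect 61
    rpc.py bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].total_data_clusters'   # expect 99

61 free of 99 total is consistent with the earlier state: the 150M lvol pins 38 clusters (num_allocated_clusters in the bdev dump), and 99 - 38 = 61.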
00:07:55.749 [2024-10-16 06:50:55.245837] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:55.749 [2024-10-16 06:50:55.246285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.690 06:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:56.690 06:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:07:56.690 06:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:56.690 06:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:56.690 06:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:56.690 06:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:56.690 06:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:56.690 [2024-10-16 06:50:56.111313] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:56.690 [2024-10-16 06:50:56.111413] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:56.690 [2024-10-16 06:50:56.111436] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:56.690 06:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:56.690 06:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev ba8f9982-7cc6-4e33-aa14-5d681a85c39e 00:07:56.690 06:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=ba8f9982-7cc6-4e33-aa14-5d681a85c39e 00:07:56.690 06:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:56.690 06:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:07:56.690 06:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:56.690 06:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:56.690 06:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:56.950 06:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ba8f9982-7cc6-4e33-aa14-5d681a85c39e -t 2000 00:07:56.950 [ 00:07:56.950 { 00:07:56.950 "name": "ba8f9982-7cc6-4e33-aa14-5d681a85c39e", 00:07:56.950 "aliases": [ 00:07:56.950 "lvs/lvol" 00:07:56.950 ], 00:07:56.950 "product_name": "Logical Volume", 00:07:56.950 "block_size": 4096, 00:07:56.950 "num_blocks": 38912, 00:07:56.950 "uuid": "ba8f9982-7cc6-4e33-aa14-5d681a85c39e", 00:07:56.950 "assigned_rate_limits": { 00:07:56.950 "rw_ios_per_sec": 0, 00:07:56.950 "rw_mbytes_per_sec": 0, 
00:07:56.950 "r_mbytes_per_sec": 0, 00:07:56.950 "w_mbytes_per_sec": 0 00:07:56.950 }, 00:07:56.950 "claimed": false, 00:07:56.950 "zoned": false, 00:07:56.950 "supported_io_types": { 00:07:56.950 "read": true, 00:07:56.950 "write": true, 00:07:56.950 "unmap": true, 00:07:56.950 "flush": false, 00:07:56.950 "reset": true, 00:07:56.950 "nvme_admin": false, 00:07:56.950 "nvme_io": false, 00:07:56.950 "nvme_io_md": false, 00:07:56.950 "write_zeroes": true, 00:07:56.950 "zcopy": false, 00:07:56.950 "get_zone_info": false, 00:07:56.950 "zone_management": false, 00:07:56.950 "zone_append": false, 00:07:56.950 "compare": false, 00:07:56.950 "compare_and_write": false, 00:07:56.950 "abort": false, 00:07:56.950 "seek_hole": true, 00:07:56.950 "seek_data": true, 00:07:56.950 "copy": false, 00:07:56.950 "nvme_iov_md": false 00:07:56.950 }, 00:07:56.950 "driver_specific": { 00:07:56.950 "lvol": { 00:07:56.950 "lvol_store_uuid": "1a65e2c9-a303-408f-ba32-b7bbe5eb540f", 00:07:56.950 "base_bdev": "aio_bdev", 00:07:56.950 "thin_provision": false, 00:07:56.950 "num_allocated_clusters": 38, 00:07:56.950 "snapshot": false, 00:07:56.950 "clone": false, 00:07:56.950 "esnap_clone": false 00:07:56.950 } 00:07:56.950 } 00:07:56.950 } 00:07:56.950 ] 00:07:57.211 06:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:07:57.211 06:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a65e2c9-a303-408f-ba32-b7bbe5eb540f 00:07:57.211 06:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:57.211 06:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:57.211 06:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a65e2c9-a303-408f-ba32-b7bbe5eb540f 00:07:57.211 06:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:57.471 06:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:57.471 06:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:57.471 [2024-10-16 06:50:56.935915] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:57.471 06:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a65e2c9-a303-408f-ba32-b7bbe5eb540f 00:07:57.471 06:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:07:57.471 06:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a65e2c9-a303-408f-ba32-b7bbe5eb540f 00:07:57.471 06:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:57.471 06:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:57.472 06:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:57.732 06:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:57.732 06:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:57.732 06:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:57.732 06:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:57.732 06:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:57.732 06:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a65e2c9-a303-408f-ba32-b7bbe5eb540f 00:07:57.732 request: 00:07:57.732 { 00:07:57.732 "uuid": "1a65e2c9-a303-408f-ba32-b7bbe5eb540f", 00:07:57.732 "method": "bdev_lvol_get_lvstores", 00:07:57.732 "req_id": 1 00:07:57.732 } 00:07:57.732 Got JSON-RPC error response 00:07:57.732 response: 00:07:57.732 { 00:07:57.732 "code": -19, 00:07:57.732 "message": "No such device" 00:07:57.732 } 00:07:57.732 06:50:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:07:57.732 06:50:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:57.732 06:50:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:57.732 06:50:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:57.732 06:50:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:57.992 aio_bdev 00:07:57.992 06:50:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ba8f9982-7cc6-4e33-aa14-5d681a85c39e 00:07:57.993 06:50:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=ba8f9982-7cc6-4e33-aa14-5d681a85c39e 00:07:57.993 06:50:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:57.993 06:50:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:07:57.993 06:50:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:57.993 06:50:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:57.993 06:50:57 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:57.993 06:50:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ba8f9982-7cc6-4e33-aa14-5d681a85c39e -t 2000 00:07:58.253 [ 00:07:58.253 { 00:07:58.253 "name": "ba8f9982-7cc6-4e33-aa14-5d681a85c39e", 00:07:58.253 "aliases": [ 00:07:58.253 "lvs/lvol" 00:07:58.253 ], 00:07:58.253 "product_name": "Logical Volume", 00:07:58.253 "block_size": 4096, 00:07:58.253 "num_blocks": 38912, 00:07:58.253 "uuid": "ba8f9982-7cc6-4e33-aa14-5d681a85c39e", 00:07:58.253 "assigned_rate_limits": { 00:07:58.253 "rw_ios_per_sec": 0, 00:07:58.253 "rw_mbytes_per_sec": 0, 00:07:58.253 "r_mbytes_per_sec": 0, 00:07:58.253 "w_mbytes_per_sec": 0 00:07:58.253 }, 00:07:58.253 "claimed": false, 00:07:58.253 "zoned": false, 00:07:58.253 "supported_io_types": { 00:07:58.253 "read": true, 00:07:58.253 "write": true, 00:07:58.253 "unmap": true, 00:07:58.253 "flush": false, 00:07:58.253 "reset": true, 00:07:58.253 "nvme_admin": false, 00:07:58.253 "nvme_io": false, 00:07:58.253 "nvme_io_md": false, 00:07:58.253 "write_zeroes": true, 00:07:58.253 "zcopy": false, 00:07:58.253 "get_zone_info": false, 00:07:58.253 "zone_management": false, 00:07:58.253 "zone_append": false, 00:07:58.253 "compare": false, 00:07:58.253 "compare_and_write": false, 00:07:58.253 "abort": false, 00:07:58.253 "seek_hole": true, 00:07:58.253 "seek_data": true, 00:07:58.253 "copy": false, 00:07:58.253 "nvme_iov_md": false 00:07:58.253 }, 00:07:58.253 "driver_specific": { 00:07:58.253 "lvol": { 00:07:58.253 "lvol_store_uuid": "1a65e2c9-a303-408f-ba32-b7bbe5eb540f", 00:07:58.253 "base_bdev": "aio_bdev", 00:07:58.253 "thin_provision": false, 00:07:58.253 "num_allocated_clusters": 38, 00:07:58.253 "snapshot": false, 00:07:58.253 "clone": false, 00:07:58.253 "esnap_clone": false 00:07:58.253 } 00:07:58.253 } 00:07:58.253 } 00:07:58.253 ] 00:07:58.253 06:50:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:07:58.253 06:50:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a65e2c9-a303-408f-ba32-b7bbe5eb540f 00:07:58.253 06:50:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:58.514 06:50:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:58.514 06:50:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a65e2c9-a303-408f-ba32-b7bbe5eb540f 00:07:58.514 06:50:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:58.514 06:50:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:58.514 06:50:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ba8f9982-7cc6-4e33-aa14-5d681a85c39e 00:07:58.774 06:50:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1a65e2c9-a303-408f-ba32-b7bbe5eb540f 00:07:59.035 06:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:59.035 06:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:59.035 00:07:59.035 real 0m17.393s 00:07:59.035 user 0m45.908s 00:07:59.035 sys 0m2.930s 00:07:59.035 06:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:59.035 06:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:59.035 ************************************ 00:07:59.035 END TEST lvs_grow_dirty 00:07:59.035 ************************************ 00:07:59.295 06:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:59.295 06:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:07:59.295 06:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:07:59.295 06:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:07:59.295 06:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:59.295 06:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:07:59.295 06:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:07:59.295 06:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:07:59.295 06:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:59.295 nvmf_trace.0 00:07:59.295 06:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:07:59.295 06:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:59.295 06:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:59.295 06:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:59.295 06:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:59.295 06:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:59.295 06:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:59.295 06:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:59.295 rmmod nvme_tcp 00:07:59.295 rmmod nvme_fabrics 00:07:59.295 rmmod nvme_keyring 00:07:59.295 06:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:59.295 06:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:59.295 06:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:59.295 
06:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 2949936 ']' 00:07:59.295 06:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 2949936 00:07:59.295 06:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 2949936 ']' 00:07:59.295 06:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 2949936 00:07:59.295 06:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:07:59.295 06:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:59.295 06:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2949936 00:07:59.295 06:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:59.295 06:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:59.295 06:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2949936' 00:07:59.295 killing process with pid 2949936 00:07:59.295 06:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 2949936 00:07:59.295 06:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 2949936 00:07:59.555 06:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:59.555 06:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:59.555 06:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:59.555 06:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:59.555 06:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:07:59.555 06:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:59.555 06:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:07:59.555 06:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:59.555 06:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:59.555 06:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:59.555 06:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:59.555 06:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:01.467 06:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:01.467 00:08:01.467 real 0m44.770s 00:08:01.467 user 1m8.077s 00:08:01.467 sys 0m10.369s 00:08:01.467 06:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:01.467 06:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:01.467 ************************************ 00:08:01.467 END TEST nvmf_lvs_grow 00:08:01.467 ************************************ 00:08:01.467 06:51:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:01.467 06:51:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:01.467 06:51:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:01.467 06:51:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:01.728 ************************************ 00:08:01.728 START TEST nvmf_bdev_io_wait 00:08:01.728 ************************************ 00:08:01.728 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:01.728 * Looking for test storage... 00:08:01.728 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:01.728 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:01.728 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:08:01.728 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:01.728 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:01.728 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:01.728 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:01.728 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:01.728 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:01.728 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:01.728 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:01.728 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:01.728 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:01.728 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:01.728 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:01.728 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:01.728 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:01.728 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:01.728 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:01.728 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:01.728 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:01.728 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:01.728 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:01.728 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:01.728 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:01.728 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:01.728 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:01.728 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:01.728 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:01.728 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:01.728 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:01.728 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:01.728 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:01.728 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:01.728 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:01.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.728 --rc genhtml_branch_coverage=1 00:08:01.728 --rc genhtml_function_coverage=1 00:08:01.728 --rc genhtml_legend=1 00:08:01.728 --rc geninfo_all_blocks=1 00:08:01.728 --rc geninfo_unexecuted_blocks=1 00:08:01.728 00:08:01.728 ' 00:08:01.728 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:01.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.728 --rc genhtml_branch_coverage=1 00:08:01.728 --rc genhtml_function_coverage=1 00:08:01.728 --rc genhtml_legend=1 00:08:01.728 --rc geninfo_all_blocks=1 00:08:01.728 --rc geninfo_unexecuted_blocks=1 00:08:01.728 00:08:01.728 ' 00:08:01.728 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:01.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.728 --rc genhtml_branch_coverage=1 00:08:01.728 --rc genhtml_function_coverage=1 00:08:01.728 --rc genhtml_legend=1 00:08:01.728 --rc geninfo_all_blocks=1 00:08:01.728 --rc geninfo_unexecuted_blocks=1 00:08:01.728 00:08:01.728 ' 00:08:01.728 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:01.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.729 --rc genhtml_branch_coverage=1 00:08:01.729 --rc genhtml_function_coverage=1 00:08:01.729 --rc genhtml_legend=1 00:08:01.729 --rc geninfo_all_blocks=1 00:08:01.729 --rc geninfo_unexecuted_blocks=1 00:08:01.729 00:08:01.729 ' 00:08:01.729 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:01.729 06:51:01 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:01.729 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:01.729 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:01.729 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:01.729 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:01.729 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:01.729 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:01.729 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:01.729 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:01.729 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:01.729 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:01.729 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:01.729 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:01.729 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:01.729 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:01.729 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:01.729 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:01.729 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:01.729 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:01.990 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:01.990 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:01.990 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:01.990 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.990 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.990 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.990 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:01.990 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.990 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:01.990 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:01.990 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:01.990 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:01.990 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:01.990 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:01.990 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:01.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:01.990 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:01.990 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:01.990 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:01.990 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:01.990 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:08:01.990 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:01.990 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:01.990 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:01.990 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:01.990 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:01.990 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:01.990 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:01.990 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:01.990 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:01.990 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:01.990 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:01.990 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:01.990 06:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:10.127 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:10.127 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:10.127 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:10.127 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:10.127 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:10.127 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:10.127 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:10.127 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:10.127 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:10.127 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:10.127 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:10.127 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:10.127 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:10.127 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:08:10.127 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:10.127 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:10.127 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:10.127 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:10.127 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:10.127 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:10.127 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:10.127 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:10.127 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:10.127 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:10.127 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:10.127 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:10.127 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:10.127 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:10.127 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:10.127 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:10.127 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:10.127 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:10.127 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:10.127 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:10.127 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:10.127 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:10.127 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:10.127 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:10.127 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:10.127 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:10.127 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:10.127 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:10.127 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:10.127 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:10.127 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:10.127 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:10.127 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:10.128 06:51:08 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:10.128 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:10.128 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:10.128 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:10.128 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.684 ms 00:08:10.128 00:08:10.128 --- 10.0.0.2 ping statistics --- 00:08:10.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.128 rtt min/avg/max/mdev = 0.684/0.684/0.684/0.000 ms 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:10.128 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:10.128 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:08:10.128 00:08:10.128 --- 10.0.0.1 ping statistics --- 00:08:10.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.128 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=2955120 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 2955120 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 2955120 ']' 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:10.128 06:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:10.128 [2024-10-16 06:51:08.811311] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
00:08:10.128 [2024-10-16 06:51:08.811381] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:10.128 [2024-10-16 06:51:08.899011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:10.128 [2024-10-16 06:51:08.953397] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:10.128 [2024-10-16 06:51:08.953443] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:10.128 [2024-10-16 06:51:08.953452] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:10.128 [2024-10-16 06:51:08.953459] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:10.128 [2024-10-16 06:51:08.953466] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:10.128 [2024-10-16 06:51:08.955880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.128 [2024-10-16 06:51:08.956094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.128 [2024-10-16 06:51:08.956094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:10.128 [2024-10-16 06:51:08.955948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:10.128 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:10.128 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:08:10.128 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:10.128 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:10.128 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:10.390 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:10.390 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:10.390 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.390 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:10.390 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.390 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:10.390 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.390 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:10.390 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.390 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:10.390 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.390 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:08:10.390 [2024-10-16 06:51:09.747180] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:10.390 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.390 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:10.390 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.390 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:10.390 Malloc0 00:08:10.390 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.390 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:10.390 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.390 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:10.390 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.390 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:10.390 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.390 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:10.390 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.390 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:10.390 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.390 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:10.390 [2024-10-16 06:51:09.812715] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:10.390 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.390 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2955659 00:08:10.390 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:10.390 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2955661 00:08:10.390 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:10.390 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:10.390 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:10.390 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:10.390 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:10.390 { 00:08:10.390 "params": { 
00:08:10.390 "name": "Nvme$subsystem", 00:08:10.390 "trtype": "$TEST_TRANSPORT", 00:08:10.390 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:10.390 "adrfam": "ipv4", 00:08:10.390 "trsvcid": "$NVMF_PORT", 00:08:10.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:10.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:10.390 "hdgst": ${hdgst:-false}, 00:08:10.390 "ddgst": ${ddgst:-false} 00:08:10.390 }, 00:08:10.390 "method": "bdev_nvme_attach_controller" 00:08:10.390 } 00:08:10.390 EOF 00:08:10.390 )") 00:08:10.390 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2955664 00:08:10.390 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:10.390 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:10.390 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:10.390 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:10.390 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:10.390 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:10.390 { 00:08:10.390 "params": { 00:08:10.390 "name": "Nvme$subsystem", 00:08:10.390 "trtype": "$TEST_TRANSPORT", 00:08:10.390 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:10.390 "adrfam": "ipv4", 00:08:10.390 "trsvcid": "$NVMF_PORT", 00:08:10.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:10.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:10.390 "hdgst": ${hdgst:-false}, 00:08:10.390 "ddgst": ${ddgst:-false} 00:08:10.390 }, 00:08:10.390 "method": "bdev_nvme_attach_controller" 00:08:10.390 } 00:08:10.390 EOF 00:08:10.390 )") 00:08:10.390 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2955668 00:08:10.390 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:10.390 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:10.390 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:10.390 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:10.390 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:10.390 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:10.390 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:10.390 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:10.390 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:10.390 { 00:08:10.390 "params": { 00:08:10.390 "name": "Nvme$subsystem", 00:08:10.390 "trtype": "$TEST_TRANSPORT", 00:08:10.390 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:08:10.390 "adrfam": "ipv4", 00:08:10.390 "trsvcid": "$NVMF_PORT", 00:08:10.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:10.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:10.390 "hdgst": ${hdgst:-false}, 00:08:10.390 "ddgst": ${ddgst:-false} 00:08:10.390 }, 00:08:10.390 "method": "bdev_nvme_attach_controller" 00:08:10.390 } 00:08:10.390 EOF 00:08:10.390 )") 00:08:10.390 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:10.390 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:10.390 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:10.390 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:10.390 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:10.390 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:10.390 { 00:08:10.390 "params": { 00:08:10.390 "name": "Nvme$subsystem", 00:08:10.390 "trtype": "$TEST_TRANSPORT", 00:08:10.390 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:10.390 "adrfam": "ipv4", 00:08:10.390 "trsvcid": "$NVMF_PORT", 00:08:10.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:10.391 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:10.391 "hdgst": ${hdgst:-false}, 00:08:10.391 "ddgst": ${ddgst:-false} 00:08:10.391 }, 00:08:10.391 "method": "bdev_nvme_attach_controller" 00:08:10.391 } 00:08:10.391 EOF 00:08:10.391 )") 00:08:10.391 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:10.391 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2955659 00:08:10.391 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:10.391 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:08:10.391 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:08:10.391 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:08:10.391 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:10.391 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:10.391 "params": { 00:08:10.391 "name": "Nvme1", 00:08:10.391 "trtype": "tcp", 00:08:10.391 "traddr": "10.0.0.2", 00:08:10.391 "adrfam": "ipv4", 00:08:10.391 "trsvcid": "4420", 00:08:10.391 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:10.391 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:10.391 "hdgst": false, 00:08:10.391 "ddgst": false 00:08:10.391 }, 00:08:10.391 "method": "bdev_nvme_attach_controller" 00:08:10.391 }' 00:08:10.391 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
00:08:10.391 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:10.391 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:10.391 "params": { 00:08:10.391 "name": "Nvme1", 00:08:10.391 "trtype": "tcp", 00:08:10.391 "traddr": "10.0.0.2", 00:08:10.391 "adrfam": "ipv4", 00:08:10.391 "trsvcid": "4420", 00:08:10.391 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:10.391 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:10.391 "hdgst": false, 00:08:10.391 "ddgst": false 00:08:10.391 }, 00:08:10.391 "method": "bdev_nvme_attach_controller" 00:08:10.391 }' 00:08:10.391 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:10.391 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:10.391 "params": { 00:08:10.391 "name": "Nvme1", 00:08:10.391 "trtype": "tcp", 00:08:10.391 "traddr": "10.0.0.2", 00:08:10.391 "adrfam": "ipv4", 00:08:10.391 "trsvcid": "4420", 00:08:10.391 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:10.391 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:10.391 "hdgst": false, 00:08:10.391 "ddgst": false 00:08:10.391 }, 00:08:10.391 "method": "bdev_nvme_attach_controller" 00:08:10.391 }' 00:08:10.391 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:10.391 06:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:10.391 "params": { 00:08:10.391 "name": "Nvme1", 00:08:10.391 "trtype": "tcp", 00:08:10.391 "traddr": "10.0.0.2", 00:08:10.391 "adrfam": "ipv4", 00:08:10.391 "trsvcid": "4420", 00:08:10.391 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:10.391 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:10.391 "hdgst": false, 00:08:10.391 "ddgst": false 00:08:10.391 }, 00:08:10.391 "method": "bdev_nvme_attach_controller" 00:08:10.391 }' 00:08:10.391 [2024-10-16 06:51:09.872271] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:08:10.391 [2024-10-16 06:51:09.872273] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:08:10.391 [2024-10-16 06:51:09.872340] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:10.391 [2024-10-16 06:51:09.872341] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:10.391 [2024-10-16 06:51:09.875326] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:08:10.391 [2024-10-16 06:51:09.875396] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:10.391 [2024-10-16 06:51:09.880157] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
00:08:10.391 [2024-10-16 06:51:09.880226] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:10.651 [2024-10-16 06:51:10.092581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.651 [2024-10-16 06:51:10.132481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:10.912 [2024-10-16 06:51:10.185750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.913 [2024-10-16 06:51:10.226884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:10.913 [2024-10-16 06:51:10.237822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.913 [2024-10-16 06:51:10.277624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:10.913 [2024-10-16 06:51:10.332820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.913 [2024-10-16 06:51:10.374508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:11.173 Running I/O for 1 seconds... 00:08:11.173 Running I/O for 1 seconds... 00:08:11.173 Running I/O for 1 seconds... 00:08:11.173 Running I/O for 1 seconds... 00:08:12.117 188280.00 IOPS, 735.47 MiB/s [2024-10-16T04:51:11.616Z] 7547.00 IOPS, 29.48 MiB/s 00:08:12.117 Latency(us) 00:08:12.117 [2024-10-16T04:51:11.616Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:12.117 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:12.117 Nvme1n1 : 1.00 187904.49 734.00 0.00 0.00 677.69 300.37 1979.73 00:08:12.117 [2024-10-16T04:51:11.616Z] =================================================================================================================== 00:08:12.117 [2024-10-16T04:51:11.616Z] Total : 187904.49 734.00 0.00 0.00 677.69 300.37 1979.73 00:08:12.117 00:08:12.117 Latency(us) 00:08:12.117 [2024-10-16T04:51:11.616Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:12.117 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:12.117 Nvme1n1 : 1.02 7560.09 29.53 0.00 0.00 16785.07 7208.96 29491.20 00:08:12.117 [2024-10-16T04:51:11.616Z] =================================================================================================================== 00:08:12.117 [2024-10-16T04:51:11.616Z] Total : 7560.09 29.53 0.00 0.00 16785.07 7208.96 29491.20 00:08:12.117 10461.00 IOPS, 40.86 MiB/s 00:08:12.117 Latency(us) 00:08:12.117 [2024-10-16T04:51:11.616Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:12.117 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:12.117 Nvme1n1 : 1.01 10503.35 41.03 0.00 0.00 12133.62 6826.67 23702.19 00:08:12.117 [2024-10-16T04:51:11.616Z] =================================================================================================================== 00:08:12.117 [2024-10-16T04:51:11.616Z] Total : 10503.35 41.03 0.00 0.00 12133.62 6826.67 23702.19 00:08:12.117 7011.00 IOPS, 27.39 MiB/s 00:08:12.117 Latency(us) 00:08:12.117 [2024-10-16T04:51:11.616Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:12.117 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:12.117 Nvme1n1 : 1.01 7127.08 27.84 0.00 0.00 17904.84 4587.52 41724.59 00:08:12.117 [2024-10-16T04:51:11.616Z] 
=================================================================================================================== 00:08:12.117 [2024-10-16T04:51:11.616Z] Total : 7127.08 27.84 0.00 0.00 17904.84 4587.52 41724.59 00:08:12.378 06:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2955661 00:08:12.378 06:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2955664 00:08:12.378 06:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2955668 00:08:12.378 06:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:12.378 06:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.378 06:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:12.378 06:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.378 06:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:12.378 06:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:12.378 06:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:12.378 06:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:12.378 06:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:12.378 06:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:12.378 06:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:12.378 06:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:12.378 rmmod nvme_tcp 00:08:12.378 rmmod nvme_fabrics 00:08:12.378 rmmod nvme_keyring 00:08:12.378 06:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:12.378 06:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:12.378 06:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:12.378 06:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 2955120 ']' 00:08:12.378 06:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 2955120 00:08:12.378 06:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 2955120 ']' 00:08:12.378 06:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 2955120 00:08:12.378 06:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:08:12.378 06:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:12.378 06:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2955120 00:08:12.378 06:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:12.378 06:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:12.379 06:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # 
echo 'killing process with pid 2955120' 00:08:12.379 killing process with pid 2955120 00:08:12.379 06:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 2955120 00:08:12.379 06:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 2955120 00:08:12.640 06:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:12.640 06:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:12.640 06:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:12.640 06:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:12.640 06:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 00:08:12.640 06:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:12.640 06:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:08:12.640 06:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:12.640 06:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:12.640 06:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.640 06:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:12.640 06:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.186 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:15.186 00:08:15.186 real 0m13.071s 00:08:15.186 user 0m19.446s 00:08:15.186 sys 0m7.455s 00:08:15.186 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:15.186 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:15.186 ************************************ 00:08:15.186 END TEST nvmf_bdev_io_wait 00:08:15.186 ************************************ 00:08:15.186 06:51:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:15.186 06:51:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:15.186 06:51:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:15.186 06:51:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:15.186 ************************************ 00:08:15.186 START TEST nvmf_queue_depth 00:08:15.186 ************************************ 00:08:15.186 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:15.186 * Looking for test storage... 
00:08:15.186 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:15.186 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:15.186 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:08:15.186 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:15.186 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:15.186 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:15.186 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:15.186 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:15.186 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:15.186 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:15.186 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:15.186 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:15.186 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:15.186 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:15.186 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:15.186 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:15.186 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:15.186 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:15.186 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:15.186 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:15.186 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:15.186 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:15.186 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:15.186 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:15.186 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:15.186 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:15.186 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:15.186 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:15.186 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:15.186 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:15.186 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:15.186 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:15.186 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:15.186 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:15.186 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:15.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.186 --rc genhtml_branch_coverage=1 00:08:15.186 --rc genhtml_function_coverage=1 00:08:15.186 --rc genhtml_legend=1 00:08:15.186 --rc geninfo_all_blocks=1 00:08:15.186 --rc geninfo_unexecuted_blocks=1 00:08:15.186 00:08:15.186 ' 00:08:15.186 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:15.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.186 --rc genhtml_branch_coverage=1 00:08:15.186 --rc genhtml_function_coverage=1 00:08:15.186 --rc genhtml_legend=1 00:08:15.186 --rc geninfo_all_blocks=1 00:08:15.186 --rc geninfo_unexecuted_blocks=1 00:08:15.186 00:08:15.186 ' 00:08:15.186 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:15.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.186 --rc genhtml_branch_coverage=1 00:08:15.186 --rc genhtml_function_coverage=1 00:08:15.186 --rc genhtml_legend=1 00:08:15.186 --rc geninfo_all_blocks=1 00:08:15.186 --rc geninfo_unexecuted_blocks=1 00:08:15.186 00:08:15.186 ' 00:08:15.186 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:15.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.186 --rc genhtml_branch_coverage=1 00:08:15.186 --rc genhtml_function_coverage=1 00:08:15.186 --rc genhtml_legend=1 00:08:15.186 --rc geninfo_all_blocks=1 00:08:15.186 --rc geninfo_unexecuted_blocks=1 00:08:15.186 00:08:15.186 ' 00:08:15.186 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:15.186 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:08:15.186 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:15.186 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:15.186 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:15.186 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:15.186 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:15.186 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:15.186 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:15.186 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:15.187 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:15.187 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:15.187 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:15.187 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:15.187 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:15.187 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:15.187 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:15.187 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:15.187 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:15.187 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:15.187 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:15.187 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:15.187 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:15.187 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.187 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.187 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.187 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:15.187 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.187 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:15.187 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:15.187 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:15.187 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:15.187 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:15.187 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:15.187 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:15.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:15.187 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:15.187 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:15.187 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:15.187 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:15.187 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:08:15.187 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:15.187 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:15.187 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:15.187 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:15.187 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:15.187 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:15.187 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:15.187 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.187 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:15.187 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.187 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:15.187 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:15.187 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:15.187 06:51:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:23.330 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:23.330 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:23.330 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:23.330 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.331 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:23.331 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:23.331 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.331 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:23.331 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:08:23.331 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:23.331 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:23.331 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:23.331 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:23.331 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:23.331 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:23.331 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:23.331 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:23.331 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:23.331 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:23.331 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:23.331 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:23.331 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:23.331 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:23.331 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:23.331 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:23.331 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:23.331 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:23.331 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:23.331 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:23.331 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:23.331 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:23.331 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:23.331 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:23.331 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:23.331 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:23.331 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:23.331 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.662 ms 00:08:23.331 00:08:23.331 --- 10.0.0.2 ping statistics --- 00:08:23.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.331 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms 00:08:23.331 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:23.331 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:23.331 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:08:23.331 00:08:23.331 --- 10.0.0.1 ping statistics --- 00:08:23.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.331 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:08:23.331 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:23.331 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:08:23.331 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:23.331 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:23.331 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:23.331 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:23.331 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:23.331 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:23.331 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:23.331 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:23.331 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:23.331 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:23.331 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:23.331 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=2960508 00:08:23.331 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 2960508 00:08:23.331 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:23.331 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2960508 ']' 00:08:23.331 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.331 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:23.331 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.331 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:23.331 06:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:23.331 [2024-10-16 06:51:21.985323] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
00:08:23.331 [2024-10-16 06:51:21.985393] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:23.331 [2024-10-16 06:51:22.060369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.331 [2024-10-16 06:51:22.112258] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:23.331 [2024-10-16 06:51:22.112308] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:23.331 [2024-10-16 06:51:22.112316] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:23.331 [2024-10-16 06:51:22.112324] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:23.331 [2024-10-16 06:51:22.112330] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:23.331 [2024-10-16 06:51:22.113096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.331 06:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:23.331 06:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:23.331 06:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:23.331 06:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:23.331 06:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:23.593 06:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:23.593 06:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:23.593 06:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.593 06:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:23.593 [2024-10-16 06:51:22.841456] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:23.593 06:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.593 06:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:23.593 06:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.593 06:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:23.593 Malloc0 00:08:23.593 06:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.593 06:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:23.593 06:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.593 06:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:23.593 06:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.593 06:51:22 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:23.593 06:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.593 06:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:23.593 06:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.593 06:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:23.593 06:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.593 06:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:23.593 [2024-10-16 06:51:22.902768] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:23.593 06:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.593 06:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2960677 00:08:23.593 06:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:23.593 06:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:23.593 06:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2960677 /var/tmp/bdevperf.sock 00:08:23.593 06:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2960677 ']' 00:08:23.593 06:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:23.593 06:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:23.593 06:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:23.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:23.593 06:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:23.593 06:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:23.593 [2024-10-16 06:51:22.959997] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
00:08:23.593 [2024-10-16 06:51:22.960060] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2960677 ] 00:08:23.593 [2024-10-16 06:51:23.040186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.593 [2024-10-16 06:51:23.093324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.537 06:51:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:24.537 06:51:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:24.537 06:51:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:24.537 06:51:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.537 06:51:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:24.537 NVMe0n1 00:08:24.537 06:51:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.537 06:51:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:24.537 Running I/O for 10 seconds... 00:08:26.866 8206.00 IOPS, 32.05 MiB/s [2024-10-16T04:51:27.306Z] 8694.00 IOPS, 33.96 MiB/s [2024-10-16T04:51:28.245Z] 9558.33 IOPS, 37.34 MiB/s [2024-10-16T04:51:29.184Z] 10477.75 IOPS, 40.93 MiB/s [2024-10-16T04:51:30.123Z] 11050.40 IOPS, 43.17 MiB/s [2024-10-16T04:51:31.063Z] 11434.67 IOPS, 44.67 MiB/s [2024-10-16T04:51:32.003Z] 11718.00 IOPS, 45.77 MiB/s [2024-10-16T04:51:33.385Z] 11978.12 IOPS, 46.79 MiB/s [2024-10-16T04:51:34.326Z] 12166.33 IOPS, 47.52 MiB/s [2024-10-16T04:51:34.326Z] 12288.30 IOPS, 48.00 MiB/s 00:08:34.827 Latency(us) 00:08:34.827 [2024-10-16T04:51:34.326Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:34.827 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:34.827 Verification LBA range: start 0x0 length 0x4000 00:08:34.827 NVMe0n1 : 10.04 12333.52 48.18 0.00 0.00 82760.83 7809.71 76458.67 00:08:34.827 [2024-10-16T04:51:34.326Z] =================================================================================================================== 00:08:34.827 [2024-10-16T04:51:34.326Z] Total : 12333.52 48.18 0.00 0.00 82760.83 7809.71 76458.67 00:08:34.827 { 00:08:34.827 "results": [ 00:08:34.827 { 00:08:34.827 "job": "NVMe0n1", 00:08:34.827 "core_mask": "0x1", 00:08:34.827 "workload": "verify", 00:08:34.827 "status": "finished", 00:08:34.827 "verify_range": { 00:08:34.827 "start": 0, 00:08:34.827 "length": 16384 00:08:34.827 }, 00:08:34.827 "queue_depth": 1024, 00:08:34.827 "io_size": 4096, 00:08:34.827 "runtime": 10.042715, 00:08:34.827 "iops": 12333.517380509154, 00:08:34.827 "mibps": 48.177802267613885, 00:08:34.827 "io_failed": 0, 00:08:34.827 "io_timeout": 0, 00:08:34.827 "avg_latency_us": 82760.83047262277, 00:08:34.827 "min_latency_us": 7809.706666666667, 00:08:34.827 "max_latency_us": 76458.66666666667 00:08:34.827 } 00:08:34.827 ], 00:08:34.827 "core_count": 1 00:08:34.827 } 00:08:34.827 06:51:34 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2960677 00:08:34.827 06:51:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 2960677 ']' 00:08:34.827 06:51:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 2960677 00:08:34.827 06:51:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:08:34.827 06:51:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:34.827 06:51:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2960677 00:08:34.827 06:51:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:34.827 06:51:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:34.827 06:51:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2960677' 00:08:34.827 killing process with pid 2960677 00:08:34.827 06:51:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2960677 00:08:34.827 Received shutdown signal, test time was about 10.000000 seconds 00:08:34.827 00:08:34.827 Latency(us) 00:08:34.827 [2024-10-16T04:51:34.326Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:34.827 [2024-10-16T04:51:34.326Z] =================================================================================================================== 00:08:34.827 [2024-10-16T04:51:34.326Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:34.827 06:51:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2960677 00:08:34.827 06:51:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:34.827 06:51:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:34.827 06:51:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:34.827 06:51:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:34.827 06:51:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:34.827 06:51:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:34.827 06:51:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:34.827 06:51:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:34.827 rmmod nvme_tcp 00:08:34.827 rmmod nvme_fabrics 00:08:34.827 rmmod nvme_keyring 00:08:34.827 06:51:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:34.827 06:51:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:34.827 06:51:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:34.827 06:51:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 2960508 ']' 00:08:34.827 06:51:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 2960508 00:08:34.827 06:51:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 2960508 ']' 00:08:34.827 06:51:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@954 -- # kill -0 2960508 00:08:34.827 06:51:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:08:34.827 06:51:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:34.827 06:51:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2960508 00:08:35.087 06:51:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:35.087 06:51:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:35.087 06:51:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2960508' 00:08:35.087 killing process with pid 2960508 00:08:35.087 06:51:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2960508 00:08:35.087 06:51:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2960508 00:08:35.087 06:51:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:35.087 06:51:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:35.087 06:51:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:35.087 06:51:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:35.087 06:51:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:08:35.087 06:51:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:35.087 06:51:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:08:35.087 06:51:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:35.087 06:51:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:35.087 06:51:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.087 06:51:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:35.087 06:51:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.131 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:37.131 00:08:37.131 real 0m22.396s 00:08:37.131 user 0m25.582s 00:08:37.131 sys 0m7.012s 00:08:37.131 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:37.131 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:37.131 ************************************ 00:08:37.131 END TEST nvmf_queue_depth 00:08:37.131 ************************************ 00:08:37.131 06:51:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:37.131 06:51:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:37.131 06:51:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:37.131 06:51:36 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:08:37.393 ************************************ 00:08:37.393 START TEST nvmf_target_multipath 00:08:37.394 ************************************ 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:37.394 * Looking for test storage... 00:08:37.394 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:37.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.394 --rc genhtml_branch_coverage=1 00:08:37.394 --rc genhtml_function_coverage=1 00:08:37.394 --rc genhtml_legend=1 00:08:37.394 --rc geninfo_all_blocks=1 00:08:37.394 --rc geninfo_unexecuted_blocks=1 00:08:37.394 00:08:37.394 ' 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:37.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.394 --rc genhtml_branch_coverage=1 00:08:37.394 --rc genhtml_function_coverage=1 00:08:37.394 --rc genhtml_legend=1 00:08:37.394 --rc geninfo_all_blocks=1 00:08:37.394 --rc geninfo_unexecuted_blocks=1 00:08:37.394 00:08:37.394 ' 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:37.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.394 --rc genhtml_branch_coverage=1 00:08:37.394 --rc genhtml_function_coverage=1 00:08:37.394 --rc genhtml_legend=1 00:08:37.394 --rc geninfo_all_blocks=1 00:08:37.394 --rc geninfo_unexecuted_blocks=1 00:08:37.394 00:08:37.394 ' 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:37.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.394 --rc genhtml_branch_coverage=1 00:08:37.394 --rc genhtml_function_coverage=1 00:08:37.394 --rc genhtml_legend=1 00:08:37.394 --rc geninfo_all_blocks=1 00:08:37.394 --rc geninfo_unexecuted_blocks=1 00:08:37.394 00:08:37.394 ' 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:37.394 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:37.394 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:37.395 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:37.395 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:37.395 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:37.395 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:37.395 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:37.395 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:37.395 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:37.395 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:37.395 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:37.395 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:37.395 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.395 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:37.395 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:37.395 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:37.395 06:51:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:45.535 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:45.535 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:45.535 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:45.535 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:45.535 06:51:44 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:45.536 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:45.536 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:45.536 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.623 ms 00:08:45.536 00:08:45.536 --- 10.0.0.2 ping statistics --- 00:08:45.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.536 rtt min/avg/max/mdev = 0.623/0.623/0.623/0.000 ms 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:45.536 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:45.536 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:08:45.536 00:08:45.536 --- 10.0.0.1 ping statistics --- 00:08:45.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.536 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:45.536 only one NIC for nvmf test 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
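
The nvmf_tcp_init sequence traced above is how the harness gets real on-wire traffic out of a single dual-port NIC: the target-side port (cvl_0_0 in this run) is moved into a private network namespace while the initiator-side port (cvl_0_1) stays in the default namespace, so 10.0.0.1 and 10.0.0.2 genuinely talk across the physical link. A condensed sketch, with the interface and namespace names taken from this run rather than fixed:

# Condensed from the nvmf_tcp_init trace above; cvl_0_0/cvl_0_1 are the
# names this run happened to detect, not constants.
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0   # target side, moved into the namespace
INI_IF=cvl_0_1   # initiator side, stays in the default namespace

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
# open the NVMe/TCP port on the initiator side; the SPDK_NVMF comment tag
# lets teardown strip exactly these rules later
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# sanity check: each side can reach the other across the link
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

That comment tag is what the iptr helper relies on in the teardown trace that follows: it restores the ruleset with iptables-save piped through grep -v SPDK_NVMF into iptables-restore, removing the test's rules without touching anything else.
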
00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:45.536 rmmod nvme_tcp 00:08:45.536 rmmod nvme_fabrics 00:08:45.536 rmmod nvme_keyring 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:45.536 06:51:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.445 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:47.445 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:47.445 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:47.446 00:08:47.446 real 0m9.983s 00:08:47.446 user 0m2.158s 00:08:47.446 sys 0m5.771s 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:47.446 ************************************ 00:08:47.446 END TEST nvmf_target_multipath 00:08:47.446 ************************************ 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:47.446 ************************************ 00:08:47.446 START TEST nvmf_zcopy 00:08:47.446 ************************************ 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:47.446 * Looking for test storage... 
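
Before the zcopy test body starts, run_test's coverage plumbing probes the installed lcov: the lt 1.15 2 call traced just below walks scripts/common.sh's cmp_versions, which splits both version strings on dots and dashes and compares them component by component. A minimal re-creation of that comparison (a simplified sketch; the real helper also validates each component through its decimal function):

lt() { cmp_versions "$1" "<" "$2"; }

cmp_versions() {
    local -a ver1 ver2
    local op=$2 v ver1_l ver2_l
    IFS=.- read -ra ver1 <<< "$1"
    IFS=.- read -ra ver2 <<< "$3"
    ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
        # missing components count as 0, so "2" compares as 2.0
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $op == ">" ]]; return; }
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $op == "<" ]]; return; }
    done
    [[ $op == "==" || $op == "<=" || $op == ">=" ]]
}

lt 1.15 2 && echo "installed lcov predates 2.x"   # true for the 1.15 seen here
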
00:08:47.446 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:47.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.446 --rc genhtml_branch_coverage=1 00:08:47.446 --rc genhtml_function_coverage=1 00:08:47.446 --rc genhtml_legend=1 00:08:47.446 --rc geninfo_all_blocks=1 00:08:47.446 --rc geninfo_unexecuted_blocks=1 00:08:47.446 00:08:47.446 ' 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:47.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.446 --rc genhtml_branch_coverage=1 00:08:47.446 --rc genhtml_function_coverage=1 00:08:47.446 --rc genhtml_legend=1 00:08:47.446 --rc geninfo_all_blocks=1 00:08:47.446 --rc geninfo_unexecuted_blocks=1 00:08:47.446 00:08:47.446 ' 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:47.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.446 --rc genhtml_branch_coverage=1 00:08:47.446 --rc genhtml_function_coverage=1 00:08:47.446 --rc genhtml_legend=1 00:08:47.446 --rc geninfo_all_blocks=1 00:08:47.446 --rc geninfo_unexecuted_blocks=1 00:08:47.446 00:08:47.446 ' 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:47.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.446 --rc genhtml_branch_coverage=1 00:08:47.446 --rc genhtml_function_coverage=1 00:08:47.446 --rc genhtml_legend=1 00:08:47.446 --rc geninfo_all_blocks=1 00:08:47.446 --rc geninfo_unexecuted_blocks=1 00:08:47.446 00:08:47.446 ' 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:47.446 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.447 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.447 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.447 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:47.447 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.447 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:47.447 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:47.447 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:47.447 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:47.447 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:47.447 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:47.447 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:47.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:47.447 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:47.447 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:47.447 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:47.447 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:47.447 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:47.447 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:08:47.447 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:47.447 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:47.447 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:47.447 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:47.447 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:47.447 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.447 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:47.447 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:47.447 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:47.447 06:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:55.586 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:55.586 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:55.586 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:55.586 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:55.586 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:55.587 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:55.587 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:55.587 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:55.587 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:55.587 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:55.587 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.651 ms 00:08:55.587 00:08:55.587 --- 10.0.0.2 ping statistics --- 00:08:55.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:55.587 rtt min/avg/max/mdev = 0.651/0.651/0.651/0.000 ms 00:08:55.587 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:55.587 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:55.587 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms 00:08:55.587 00:08:55.587 --- 10.0.0.1 ping statistics --- 00:08:55.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:55.587 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:08:55.587 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:55.587 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:08:55.587 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:55.587 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:55.587 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:55.587 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:55.587 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:55.587 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:55.587 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:55.587 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:55.587 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:55.587 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:55.587 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:55.587 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=2971386 00:08:55.587 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 2971386 00:08:55.587 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:55.587 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 2971386 ']' 00:08:55.587 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:55.587 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:55.587 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:55.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:55.587 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:55.587 06:51:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:55.587 [2024-10-16 06:51:54.507053] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
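
nvmfappstart, traced above, launches nvmf_tgt inside the freshly built target namespace and then blocks in waitforlisten until the application's RPC socket answers. In outline (the polling loop here is a stand-in for SPDK's helper, which does roughly this plus extra liveness handling):

# Stand-in sketch for nvmfappstart/waitforlisten as traced above.
ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done

The -m 0x2 core mask pins the target to core 1, leaving core 0 free for the bdevperf initiator started later; the "Reactor started on core 1" notice in the startup banner below confirms the placement.
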
00:08:55.587 [2024-10-16 06:51:54.507120] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:55.587 [2024-10-16 06:51:54.596799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.587 [2024-10-16 06:51:54.649443] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:55.587 [2024-10-16 06:51:54.649488] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:55.587 [2024-10-16 06:51:54.649497] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:55.587 [2024-10-16 06:51:54.649504] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:55.587 [2024-10-16 06:51:54.649510] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:55.587 [2024-10-16 06:51:54.650183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:55.847 06:51:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:55.847 06:51:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:08:55.847 06:51:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:55.847 06:51:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:55.847 06:51:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:56.109 06:51:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:56.109 06:51:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:56.109 06:51:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:56.109 06:51:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.109 06:51:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:56.109 [2024-10-16 06:51:55.374019] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:56.109 06:51:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.109 06:51:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:56.109 06:51:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.109 06:51:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:56.109 06:51:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.109 06:51:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:56.109 06:51:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.109 06:51:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:56.109 [2024-10-16 06:51:55.398258] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:08:56.109 06:51:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.109 06:51:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:56.109 06:51:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.109 06:51:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:56.109 06:51:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.109 06:51:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:56.109 06:51:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.109 06:51:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:56.109 malloc0 00:08:56.109 06:51:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.109 06:51:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:56.109 06:51:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.109 06:51:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:56.109 06:51:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.109 06:51:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:56.109 06:51:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:56.109 06:51:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:08:56.109 06:51:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:08:56.109 06:51:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:56.109 06:51:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:56.109 { 00:08:56.109 "params": { 00:08:56.109 "name": "Nvme$subsystem", 00:08:56.109 "trtype": "$TEST_TRANSPORT", 00:08:56.109 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:56.109 "adrfam": "ipv4", 00:08:56.109 "trsvcid": "$NVMF_PORT", 00:08:56.109 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:56.109 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:56.109 "hdgst": ${hdgst:-false}, 00:08:56.109 "ddgst": ${ddgst:-false} 00:08:56.109 }, 00:08:56.109 "method": "bdev_nvme_attach_controller" 00:08:56.109 } 00:08:56.109 EOF 00:08:56.109 )") 00:08:56.109 06:51:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:08:56.109 06:51:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 
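
gen_nvmf_target_json, whose trace starts here, emits one bdev_nvme_attach_controller stanza per target subsystem and hands the result to bdevperf through a process-substitution file descriptor (/dev/fd/62 above). The "params" fragment is printed verbatim at @584 just below; the outer subsystems/bdev wrapper and the trailing bdev_wait_for_examine entry are inferred from SPDK's helper rather than logged here, so treat this reconstruction as a sketch:

# Assumed final shape of the config bdevperf receives in this run.
cat > /tmp/bdevperf.json << 'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
./build/examples/bdevperf --json /tmp/bdevperf.json -t 10 -q 128 -w verify -o 8192
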
00:08:56.109 06:51:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:08:56.109 06:51:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:56.109 "params": { 00:08:56.109 "name": "Nvme1", 00:08:56.109 "trtype": "tcp", 00:08:56.109 "traddr": "10.0.0.2", 00:08:56.109 "adrfam": "ipv4", 00:08:56.109 "trsvcid": "4420", 00:08:56.109 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:56.109 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:56.109 "hdgst": false, 00:08:56.109 "ddgst": false 00:08:56.109 }, 00:08:56.109 "method": "bdev_nvme_attach_controller" 00:08:56.109 }' 00:08:56.109 [2024-10-16 06:51:55.499961] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:08:56.109 [2024-10-16 06:51:55.500029] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2971731 ] 00:08:56.109 [2024-10-16 06:51:55.584456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.370 [2024-10-16 06:51:55.637090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.631 Running I/O for 10 seconds... 00:08:58.513 6843.00 IOPS, 53.46 MiB/s [2024-10-16T04:51:59.397Z] 8284.00 IOPS, 64.72 MiB/s [2024-10-16T04:52:00.340Z] 8787.33 IOPS, 68.65 MiB/s [2024-10-16T04:52:01.282Z] 9039.25 IOPS, 70.62 MiB/s [2024-10-16T04:52:02.224Z] 9186.40 IOPS, 71.77 MiB/s [2024-10-16T04:52:03.166Z] 9287.67 IOPS, 72.56 MiB/s [2024-10-16T04:52:04.108Z] 9359.57 IOPS, 73.12 MiB/s [2024-10-16T04:52:05.052Z] 9412.62 IOPS, 73.54 MiB/s [2024-10-16T04:52:05.993Z] 9453.33 IOPS, 73.85 MiB/s [2024-10-16T04:52:06.253Z] 9486.20 IOPS, 74.11 MiB/s 00:09:06.754 Latency(us) 00:09:06.754 [2024-10-16T04:52:06.253Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:06.754 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:06.754 Verification LBA range: start 0x0 length 0x1000 00:09:06.754 Nvme1n1 : 10.01 9491.02 74.15 0.00 0.00 13440.43 1071.79 27088.21 00:09:06.754 [2024-10-16T04:52:06.253Z] =================================================================================================================== 00:09:06.754 [2024-10-16T04:52:06.253Z] Total : 9491.02 74.15 0.00 0.00 13440.43 1071.79 27088.21 00:09:06.754 06:52:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2973748 00:09:06.754 06:52:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:06.754 06:52:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:06.754 06:52:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:06.754 06:52:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:06.754 06:52:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:09:06.754 06:52:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:09:06.754 06:52:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:06.754 06:52:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:06.754 { 00:09:06.754 "params": { 00:09:06.754 "name": 
"Nvme$subsystem", 00:09:06.754 "trtype": "$TEST_TRANSPORT", 00:09:06.754 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:06.754 "adrfam": "ipv4", 00:09:06.754 "trsvcid": "$NVMF_PORT", 00:09:06.754 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:06.754 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:06.754 "hdgst": ${hdgst:-false}, 00:09:06.754 "ddgst": ${ddgst:-false} 00:09:06.754 }, 00:09:06.754 "method": "bdev_nvme_attach_controller" 00:09:06.754 } 00:09:06.754 EOF 00:09:06.754 )") 00:09:06.754 06:52:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:09:06.754 [2024-10-16 06:52:06.107777] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.754 [2024-10-16 06:52:06.107806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.754 06:52:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:09:06.754 06:52:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:09:06.754 06:52:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:06.754 "params": { 00:09:06.754 "name": "Nvme1", 00:09:06.754 "trtype": "tcp", 00:09:06.754 "traddr": "10.0.0.2", 00:09:06.754 "adrfam": "ipv4", 00:09:06.754 "trsvcid": "4420", 00:09:06.754 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:06.754 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:06.754 "hdgst": false, 00:09:06.754 "ddgst": false 00:09:06.754 }, 00:09:06.754 "method": "bdev_nvme_attach_controller" 00:09:06.754 }' 00:09:06.754 [2024-10-16 06:52:06.115768] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.754 [2024-10-16 06:52:06.115778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.754 [2024-10-16 06:52:06.123787] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.754 [2024-10-16 06:52:06.123795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.754 [2024-10-16 06:52:06.135817] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.754 [2024-10-16 06:52:06.135824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.754 [2024-10-16 06:52:06.147850] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.754 [2024-10-16 06:52:06.147857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.754 [2024-10-16 06:52:06.152215] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
00:09:06.755 [2024-10-16 06:52:06.152263] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2973748 ] 00:09:06.755 [2024-10-16 06:52:06.159881] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.755 [2024-10-16 06:52:06.159889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.755 [2024-10-16 06:52:06.171909] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.755 [2024-10-16 06:52:06.171916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.755 [2024-10-16 06:52:06.183940] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.755 [2024-10-16 06:52:06.183948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.755 [2024-10-16 06:52:06.195971] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.755 [2024-10-16 06:52:06.195978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.755 [2024-10-16 06:52:06.208001] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.755 [2024-10-16 06:52:06.208008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.755 [2024-10-16 06:52:06.216022] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.755 [2024-10-16 06:52:06.216028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.755 [2024-10-16 06:52:06.224043] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.755 [2024-10-16 06:52:06.224049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.755 [2024-10-16 06:52:06.227812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.755 [2024-10-16 06:52:06.232063] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.755 [2024-10-16 06:52:06.232070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.755 [2024-10-16 06:52:06.240084] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.755 [2024-10-16 06:52:06.240091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.755 [2024-10-16 06:52:06.248103] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.755 [2024-10-16 06:52:06.248110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.016 [2024-10-16 06:52:06.256123] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.016 [2024-10-16 06:52:06.256133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.016 [2024-10-16 06:52:06.257763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.016 [2024-10-16 06:52:06.264143] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.016 [2024-10-16 06:52:06.264149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.016 [2024-10-16 06:52:06.272167] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
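The repeating error pair comes from the RPC path: while bdevperf drives I/O, nvmf_subsystem_add_ns is re-issued for NSID 1, which malloc0 already occupies, and every attempt is rejected. A minimal by-hand reproduction, a sketch only, assuming the stock scripts/rpc.py client from the same tree and the target state set up above:

  # hedged repro sketch, not part of zcopy.sh: re-add an NSID that is already attached
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # the target answers with the same two *ERROR* lines seen in this log:
  #   subsystem.c:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
  #   nvmf_rpc.c:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace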
00:09:07.277 Running I/O for 5 seconds...
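Both bdevperf instances read their bdev configuration from a process-substitution fd (--json /dev/fd/62 and /dev/fd/63). A standalone sketch of an equivalent invocation for this 5-second run, with the JSON written to a file instead; the outer subsystems/config wrapper is an assumption about what gen_nvmf_target_json emits, while the params and flags are the ones traced above:

  # hedged sketch, assuming the standard SPDK --json config layout
  cat > /tmp/nvme_attach.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme1",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF
  # -t run time (s), -q queue depth, -w workload, -M rwmixread %, -o I/O size (bytes)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      --json /tmp/nvme_attach.json -t 5 -q 128 -w randrw -M 50 -o 8192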
00:09:08.062 19429.00 IOPS, 151.79 MiB/s [2024-10-16T04:52:07.561Z]
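The two columns track each other exactly: at the 8192-byte I/O size used here, MiB/s = IOPS * 8192 / 2^20, i.e. IOPS / 128. A quick check of the sample above:

  # 8 KiB per I/O, so MiB/s is IOPS / 128 (sanity check, not from the test scripts)
  awk 'BEGIN { printf "%.2f MiB/s\n", 19429.00 / 128 }'   # -> 151.79 MiB/s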
00:09:09.106 19533.50 IOPS, 152.61 MiB/s [2024-10-16T04:52:08.605Z]
00:09:09.889 [2024-10-16 06:52:09.177560]
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.889 [2024-10-16 06:52:09.177575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.889 [2024-10-16 06:52:09.190676] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.889 [2024-10-16 06:52:09.190691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.889 [2024-10-16 06:52:09.203352] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.889 [2024-10-16 06:52:09.203366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.889 [2024-10-16 06:52:09.216324] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.889 [2024-10-16 06:52:09.216339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.889 [2024-10-16 06:52:09.228801] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.889 [2024-10-16 06:52:09.228815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.889 [2024-10-16 06:52:09.241821] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.889 [2024-10-16 06:52:09.241836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.889 [2024-10-16 06:52:09.255321] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.889 [2024-10-16 06:52:09.255336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.889 [2024-10-16 06:52:09.267918] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.889 [2024-10-16 06:52:09.267932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.889 [2024-10-16 06:52:09.281071] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.889 [2024-10-16 06:52:09.281085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.889 [2024-10-16 06:52:09.294109] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.889 [2024-10-16 06:52:09.294124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.889 [2024-10-16 06:52:09.306600] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.889 [2024-10-16 06:52:09.306614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.889 [2024-10-16 06:52:09.319823] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.889 [2024-10-16 06:52:09.319838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.889 [2024-10-16 06:52:09.332738] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.889 [2024-10-16 06:52:09.332752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.889 [2024-10-16 06:52:09.345455] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.889 [2024-10-16 06:52:09.345469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.889 [2024-10-16 06:52:09.358454] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.889 [2024-10-16 06:52:09.358469] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.889 [2024-10-16 06:52:09.371356] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.889 [2024-10-16 06:52:09.371374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.889 [2024-10-16 06:52:09.384371] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.889 [2024-10-16 06:52:09.384386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.150 [2024-10-16 06:52:09.397293] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.150 [2024-10-16 06:52:09.397308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.150 [2024-10-16 06:52:09.410389] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.150 [2024-10-16 06:52:09.410404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.150 [2024-10-16 06:52:09.423349] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.150 [2024-10-16 06:52:09.423364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.150 [2024-10-16 06:52:09.436510] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.150 [2024-10-16 06:52:09.436525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.150 [2024-10-16 06:52:09.448988] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.150 [2024-10-16 06:52:09.449002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.150 [2024-10-16 06:52:09.461996] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.150 [2024-10-16 06:52:09.462010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.150 [2024-10-16 06:52:09.474950] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.150 [2024-10-16 06:52:09.474965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.150 [2024-10-16 06:52:09.487716] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.150 [2024-10-16 06:52:09.487732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.150 [2024-10-16 06:52:09.500847] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.150 [2024-10-16 06:52:09.500862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.150 [2024-10-16 06:52:09.513375] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.150 [2024-10-16 06:52:09.513390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.150 [2024-10-16 06:52:09.526385] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.150 [2024-10-16 06:52:09.526399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.150 19567.00 IOPS, 152.87 MiB/s [2024-10-16T04:52:09.649Z] [2024-10-16 06:52:09.539527] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.150 [2024-10-16 06:52:09.539542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.150 [2024-10-16 
06:52:09.552363] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.150 [2024-10-16 06:52:09.552377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.150 [2024-10-16 06:52:09.565046] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.150 [2024-10-16 06:52:09.565061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.150 [2024-10-16 06:52:09.578130] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.150 [2024-10-16 06:52:09.578145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.150 [2024-10-16 06:52:09.590985] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.150 [2024-10-16 06:52:09.591000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.150 [2024-10-16 06:52:09.603604] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.150 [2024-10-16 06:52:09.603619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.150 [2024-10-16 06:52:09.617369] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.150 [2024-10-16 06:52:09.617387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.150 [2024-10-16 06:52:09.630180] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.150 [2024-10-16 06:52:09.630195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.150 [2024-10-16 06:52:09.643254] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.150 [2024-10-16 06:52:09.643269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.412 [2024-10-16 06:52:09.656449] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.412 [2024-10-16 06:52:09.656464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.412 [2024-10-16 06:52:09.669264] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.412 [2024-10-16 06:52:09.669278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.412 [2024-10-16 06:52:09.682349] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.412 [2024-10-16 06:52:09.682364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.412 [2024-10-16 06:52:09.695496] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.412 [2024-10-16 06:52:09.695510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.412 [2024-10-16 06:52:09.708237] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.412 [2024-10-16 06:52:09.708251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.412 [2024-10-16 06:52:09.720963] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.412 [2024-10-16 06:52:09.720978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.412 [2024-10-16 06:52:09.734085] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.412 [2024-10-16 06:52:09.734100] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.412 [2024-10-16 06:52:09.746997] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.412 [2024-10-16 06:52:09.747011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.412 [2024-10-16 06:52:09.759852] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.412 [2024-10-16 06:52:09.759867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.412 [2024-10-16 06:52:09.773044] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.412 [2024-10-16 06:52:09.773058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.412 [2024-10-16 06:52:09.785464] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.412 [2024-10-16 06:52:09.785478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.412 [2024-10-16 06:52:09.798427] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.412 [2024-10-16 06:52:09.798442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.412 [2024-10-16 06:52:09.811619] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.412 [2024-10-16 06:52:09.811634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.412 [2024-10-16 06:52:09.824978] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.412 [2024-10-16 06:52:09.824992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.412 [2024-10-16 06:52:09.837601] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.412 [2024-10-16 06:52:09.837615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.412 [2024-10-16 06:52:09.850977] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.412 [2024-10-16 06:52:09.850991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.412 [2024-10-16 06:52:09.864166] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.412 [2024-10-16 06:52:09.864181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.412 [2024-10-16 06:52:09.876938] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.412 [2024-10-16 06:52:09.876953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.412 [2024-10-16 06:52:09.889938] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.412 [2024-10-16 06:52:09.889953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.412 [2024-10-16 06:52:09.902701] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.412 [2024-10-16 06:52:09.902717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.673 [2024-10-16 06:52:09.915724] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.673 [2024-10-16 06:52:09.915740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.673 [2024-10-16 06:52:09.928698] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.673 [2024-10-16 06:52:09.928714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.673 [2024-10-16 06:52:09.941710] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.673 [2024-10-16 06:52:09.941725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.673 [2024-10-16 06:52:09.954559] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.673 [2024-10-16 06:52:09.954575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.673 [2024-10-16 06:52:09.967061] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.673 [2024-10-16 06:52:09.967076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.673 [2024-10-16 06:52:09.979681] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.673 [2024-10-16 06:52:09.979696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.673 [2024-10-16 06:52:09.993084] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.673 [2024-10-16 06:52:09.993099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.673 [2024-10-16 06:52:10.005900] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.673 [2024-10-16 06:52:10.005918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.673 [2024-10-16 06:52:10.018370] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.673 [2024-10-16 06:52:10.018386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.673 [2024-10-16 06:52:10.031496] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.673 [2024-10-16 06:52:10.031511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.673 [2024-10-16 06:52:10.044670] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.673 [2024-10-16 06:52:10.044685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.673 [2024-10-16 06:52:10.057592] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.673 [2024-10-16 06:52:10.057608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.673 [2024-10-16 06:52:10.070820] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.673 [2024-10-16 06:52:10.070836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.673 [2024-10-16 06:52:10.083856] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.673 [2024-10-16 06:52:10.083872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.673 [2024-10-16 06:52:10.097092] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.673 [2024-10-16 06:52:10.097108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.673 [2024-10-16 06:52:10.110338] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.673 [2024-10-16 06:52:10.110353] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.673 [2024-10-16 06:52:10.123535] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.673 [2024-10-16 06:52:10.123550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.673 [2024-10-16 06:52:10.136502] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.673 [2024-10-16 06:52:10.136517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.673 [2024-10-16 06:52:10.149478] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.673 [2024-10-16 06:52:10.149493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.673 [2024-10-16 06:52:10.162445] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.673 [2024-10-16 06:52:10.162460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.934 [2024-10-16 06:52:10.175475] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.934 [2024-10-16 06:52:10.175491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.934 [2024-10-16 06:52:10.188685] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.934 [2024-10-16 06:52:10.188701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.934 [2024-10-16 06:52:10.201880] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.934 [2024-10-16 06:52:10.201896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.934 [2024-10-16 06:52:10.214587] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.934 [2024-10-16 06:52:10.214603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.934 [2024-10-16 06:52:10.227568] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.934 [2024-10-16 06:52:10.227583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.934 [2024-10-16 06:52:10.240531] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.934 [2024-10-16 06:52:10.240546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.934 [2024-10-16 06:52:10.253223] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.934 [2024-10-16 06:52:10.253239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.934 [2024-10-16 06:52:10.266207] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.934 [2024-10-16 06:52:10.266222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.934 [2024-10-16 06:52:10.278933] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.934 [2024-10-16 06:52:10.278948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.934 [2024-10-16 06:52:10.292359] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.934 [2024-10-16 06:52:10.292374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.934 [2024-10-16 06:52:10.304671] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.934 [2024-10-16 06:52:10.304686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.934 [2024-10-16 06:52:10.316585] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.934 [2024-10-16 06:52:10.316600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.934 [2024-10-16 06:52:10.330018] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.934 [2024-10-16 06:52:10.330034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.934 [2024-10-16 06:52:10.343043] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.934 [2024-10-16 06:52:10.343058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.934 [2024-10-16 06:52:10.355903] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.934 [2024-10-16 06:52:10.355918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.934 [2024-10-16 06:52:10.368731] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.934 [2024-10-16 06:52:10.368747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.934 [2024-10-16 06:52:10.382008] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.934 [2024-10-16 06:52:10.382023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.934 [2024-10-16 06:52:10.395047] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.934 [2024-10-16 06:52:10.395062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.934 [2024-10-16 06:52:10.408316] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.934 [2024-10-16 06:52:10.408330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.934 [2024-10-16 06:52:10.421058] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.934 [2024-10-16 06:52:10.421073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.934 [2024-10-16 06:52:10.433980] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.934 [2024-10-16 06:52:10.433995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.196 [2024-10-16 06:52:10.446715] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.196 [2024-10-16 06:52:10.446731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.196 [2024-10-16 06:52:10.460094] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.196 [2024-10-16 06:52:10.460110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.196 [2024-10-16 06:52:10.473405] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.196 [2024-10-16 06:52:10.473419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.196 [2024-10-16 06:52:10.486428] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.196 [2024-10-16 06:52:10.486443] 
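[Editor's note: the repeated pair above is expected noise, not a test failure. zcopy.sh appears to hammer nvmf_subsystem_add_ns for a namespace ID that is already attached while the I/O generator runs, so every attempt is rejected by subsystem.c and reported by the RPC layer; the run still proceeds to END TEST nvmf_zcopy further down. A minimal sketch that reproduces the same error against a local SPDK target; it assumes scripts/rpc.py, a running nvmf target, and an existing subsystem nqn.2016-06.io.spdk:cnode1 (the Malloc bdev names are illustrative, not from this run):]

  ./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
  ./scripts/rpc.py bdev_malloc_create -b Malloc1 64 512
  # First add claims NSID 1 and succeeds.
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1
  # Second add requests the same NSID and fails: "Requested NSID 1 already in use".
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1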
00:09:11.196 19591.75 IOPS, 153.06 MiB/s [2024-10-16T04:52:10.695Z]
[... error pair continues roughly every 13 ms; occurrences from 06:52:10.538 through 06:52:11.526 omitted ...]
00:09:12.240 19613.80 IOPS, 153.23 MiB/s [2024-10-16T04:52:11.739Z]
[2024-10-16 06:52:11.538917] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-10-16 06:52:11.538931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:12.240
00:09:12.240 Latency(us)
00:09:12.240 [2024-10-16T04:52:11.739Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:09:12.240 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:12.240 Nvme1n1            :       5.01   19616.99     153.26       0.00     0.00    6519.00    3003.73   15291.73
00:09:12.240 [2024-10-16T04:52:11.739Z] ===================================================================================================================
00:09:12.240 [2024-10-16T04:52:11.739Z] Total              :           19616.99     153.26       0.00     0.00    6519.00    3003.73   15291.73
[... error pair continues; occurrences from 06:52:11.548 through 06:52:11.644 omitted ...]
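[Editor's note: the periodic throughput lines and the table above are internally consistent. With the job's 8192-byte I/O size, MiB/s = IOPS x 8192 / 2^20; for the Total row, 19616.99 x 8192 / 1048576 is about 153.26 MiB/s, exactly the value printed. A quick check:]

  awk 'BEGIN { printf "%.2f\n", 19616.99 * 8192 / 1048576 }'   # prints 153.26 (MiB/s)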
00:09:12.240 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2973748) - No such process
06:52:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2973748
06:52:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
06:52:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
06:52:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
06:52:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
06:52:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
06:52:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
06:52:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:12.240 delay0
06:52:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
06:52:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
06:52:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
06:52:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
06:52:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
06:52:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
[2024-10-16 06:52:11.773000] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
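[Editor's note: before launching the abort example, the script above swaps the namespace's backing bdev for a delay bdev. In bdev_delay_create, -r/-t set the average and tail read latencies and -w/-n the average and tail write latencies, in microseconds (stated from the rpc.py help, not from this log); setting all four to 1,000,000 us makes every I/O take about a second, so the abort tool always finds commands still in flight to cancel. As standalone rpc.py calls (rpc_cmd is just the in-tree wrapper), the same setup would look roughly like:]

  ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1 s average and tail latency, reads and writes
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1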
00:09:20.723 Initializing NVMe Controllers
00:09:20.723 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:20.723 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:09:20.723 Initialization complete. Launching workers.
00:09:20.723 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 251, failed: 32909
00:09:20.723 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 33043, failed to submit 117
00:09:20.723 success 32966, unsuccessful 77, failed 0
06:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
06:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
06:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup
06:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
06:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
06:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
06:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
06:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
06:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
06:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
06:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
06:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 2971386 ']'
06:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 2971386
06:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 2971386 ']'
06:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 2971386
06:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname
06:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
06:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2971386
06:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1
06:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
06:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2971386'
killing process with pid 2971386
06:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 2971386
06:52:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 2971386
06:52:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']'
06:52:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
06:52:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini
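[Editor's note: the abort example's summary a few lines up balances, which is the property this run exercises. Of 33160 I/Os accounted for, 251 completed and 32909 failed (cancelled); that total matches the 33043 aborts submitted plus the 117 that could not be submitted, and the submitted aborts resolve as 32966 successful plus 77 unsuccessful. The sums:]

  echo $((251 + 32909))   # I/Os completed + failed          -> 33160
  echo $((33043 + 117))   # aborts submitted + not submitted -> 33160
  echo $((32966 + 77))    # successful + unsuccessful aborts -> 33043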
nvmf_tcp_fini 00:09:20.723 06:52:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:20.723 06:52:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save 00:09:20.723 06:52:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:20.723 06:52:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore 00:09:20.723 06:52:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:20.723 06:52:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:20.723 06:52:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:20.723 06:52:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:20.723 06:52:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:21.665 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:21.665 00:09:21.665 real 0m34.403s 00:09:21.665 user 0m45.555s 00:09:21.665 sys 0m11.629s 00:09:21.665 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:21.665 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:21.665 ************************************ 00:09:21.665 END TEST nvmf_zcopy 00:09:21.665 ************************************ 00:09:21.665 06:52:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:21.665 06:52:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:21.665 06:52:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:21.665 06:52:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:21.926 ************************************ 00:09:21.926 START TEST nvmf_nmic 00:09:21.926 ************************************ 00:09:21.926 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:21.926 * Looking for test storage... 
00:09:21.926 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:21.926 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:21.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.927 --rc genhtml_branch_coverage=1 00:09:21.927 --rc genhtml_function_coverage=1 00:09:21.927 --rc genhtml_legend=1 00:09:21.927 --rc geninfo_all_blocks=1 00:09:21.927 --rc geninfo_unexecuted_blocks=1 00:09:21.927 00:09:21.927 ' 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:21.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.927 --rc genhtml_branch_coverage=1 00:09:21.927 --rc genhtml_function_coverage=1 00:09:21.927 --rc genhtml_legend=1 00:09:21.927 --rc geninfo_all_blocks=1 00:09:21.927 --rc geninfo_unexecuted_blocks=1 00:09:21.927 00:09:21.927 ' 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:21.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.927 --rc genhtml_branch_coverage=1 00:09:21.927 --rc genhtml_function_coverage=1 00:09:21.927 --rc genhtml_legend=1 00:09:21.927 --rc geninfo_all_blocks=1 00:09:21.927 --rc geninfo_unexecuted_blocks=1 00:09:21.927 00:09:21.927 ' 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:21.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.927 --rc genhtml_branch_coverage=1 00:09:21.927 --rc genhtml_function_coverage=1 00:09:21.927 --rc genhtml_legend=1 00:09:21.927 --rc geninfo_all_blocks=1 00:09:21.927 --rc geninfo_unexecuted_blocks=1 00:09:21.927 00:09:21.927 ' 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:21.927 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:21.928 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:21.928 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:21.928 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:21.928 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:21.928 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:21.928 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:21.928 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:21.928 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:21.928 
06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:22.188 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:22.188 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:22.188 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:22.188 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:22.188 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.188 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:22.188 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.188 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:22.188 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:22.188 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:22.188 06:52:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:30.333 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:30.333 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:30.333 06:52:28 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:30.333 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:30.333 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:30.333 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:30.333 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.667 ms 00:09:30.333 00:09:30.333 --- 10.0.0.2 ping statistics --- 00:09:30.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.333 rtt min/avg/max/mdev = 0.667/0.667/0.667/0.000 ms 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:30.333 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:30.333 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:09:30.333 00:09:30.333 --- 10.0.0.1 ping statistics --- 00:09:30.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.333 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:09:30.333 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:30.334 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:09:30.334 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:30.334 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:30.334 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:30.334 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:30.334 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:30.334 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:30.334 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:30.334 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:30.334 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:30.334 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:30.334 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:30.334 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=2980443 00:09:30.334 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 2980443 00:09:30.334 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:30.334 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 2980443 ']' 00:09:30.334 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.334 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:30.334 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.334 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:30.334 06:52:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:30.334 [2024-10-16 06:52:28.810019] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
00:09:30.334 [2024-10-16 06:52:28.810086] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:30.334 [2024-10-16 06:52:28.897808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:30.334 [2024-10-16 06:52:28.952159] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:30.334 [2024-10-16 06:52:28.952216] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:30.334 [2024-10-16 06:52:28.952227] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:30.334 [2024-10-16 06:52:28.952235] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:30.334 [2024-10-16 06:52:28.952241] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:30.334 [2024-10-16 06:52:28.954678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:30.334 [2024-10-16 06:52:28.954871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:30.334 [2024-10-16 06:52:28.954983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.334 [2024-10-16 06:52:28.954983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:30.334 06:52:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:30.334 06:52:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:09:30.334 06:52:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:30.334 06:52:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:30.334 06:52:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:30.334 06:52:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:30.334 06:52:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:30.334 06:52:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.334 06:52:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:30.334 [2024-10-16 06:52:29.687523] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:30.334 06:52:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.334 06:52:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:30.334 06:52:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.334 06:52:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:30.334 Malloc0 00:09:30.334 06:52:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.334 06:52:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:30.334 06:52:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.334 06:52:29 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:09:30.334 06:52:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.334 06:52:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:30.334 06:52:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.334 06:52:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:30.334 06:52:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.334 06:52:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:30.334 06:52:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.334 06:52:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:30.334 [2024-10-16 06:52:29.760795] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:30.334 06:52:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.334 06:52:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:30.334 test case1: single bdev can't be used in multiple subsystems 00:09:30.334 06:52:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:30.334 06:52:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.334 06:52:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:30.334 06:52:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.334 06:52:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:30.334 06:52:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.334 06:52:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:30.334 06:52:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.334 06:52:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:30.334 06:52:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:30.334 06:52:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.334 06:52:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:30.334 [2024-10-16 06:52:29.796698] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:30.334 [2024-10-16 06:52:29.796723] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:30.334 [2024-10-16 06:52:29.796732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.334 request: 00:09:30.334 { 00:09:30.334 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:30.334 "namespace": { 00:09:30.334 "bdev_name": "Malloc0", 00:09:30.334 "no_auto_visible": false 
00:09:30.334 }, 00:09:30.334 "method": "nvmf_subsystem_add_ns", 00:09:30.334 "req_id": 1 00:09:30.334 } 00:09:30.334 Got JSON-RPC error response 00:09:30.334 response: 00:09:30.334 { 00:09:30.334 "code": -32602, 00:09:30.334 "message": "Invalid parameters" 00:09:30.334 } 00:09:30.334 06:52:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:30.334 06:52:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:30.334 06:52:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:30.334 06:52:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:30.334 Adding namespace failed - expected result. 00:09:30.334 06:52:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:30.334 test case2: host connect to nvmf target in multiple paths 00:09:30.334 06:52:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:30.334 06:52:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.334 06:52:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:30.334 [2024-10-16 06:52:29.808894] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:30.334 06:52:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.334 06:52:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:32.250 06:52:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:33.634 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:33.634 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:09:33.634 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:33.634 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:33.634 06:52:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:09:35.545 06:52:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:35.545 06:52:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:35.545 06:52:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:35.545 06:52:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:35.545 06:52:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:35.545 06:52:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:09:35.545 06:52:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:35.545 [global] 00:09:35.545 thread=1 00:09:35.545 invalidate=1 00:09:35.545 rw=write 00:09:35.545 time_based=1 00:09:35.545 runtime=1 00:09:35.545 ioengine=libaio 00:09:35.545 direct=1 00:09:35.545 bs=4096 00:09:35.545 iodepth=1 00:09:35.545 norandommap=0 00:09:35.545 numjobs=1 00:09:35.545 00:09:35.545 verify_dump=1 00:09:35.545 verify_backlog=512 00:09:35.545 verify_state_save=0 00:09:35.545 do_verify=1 00:09:35.545 verify=crc32c-intel 00:09:35.545 [job0] 00:09:35.545 filename=/dev/nvme0n1 00:09:35.545 Could not set queue depth (nvme0n1) 00:09:35.805 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:35.805 fio-3.35 00:09:35.805 Starting 1 thread 00:09:37.189 00:09:37.189 job0: (groupid=0, jobs=1): err= 0: pid=2981985: Wed Oct 16 06:52:36 2024 00:09:37.189 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:37.189 slat (nsec): min=7999, max=60357, avg=25638.76, stdev=3210.24 00:09:37.189 clat (usec): min=779, max=1272, avg=1080.58, stdev=74.71 00:09:37.189 lat (usec): min=804, max=1297, avg=1106.22, stdev=74.63 00:09:37.189 clat percentiles (usec): 00:09:37.189 | 1.00th=[ 881], 5.00th=[ 922], 10.00th=[ 979], 20.00th=[ 1037], 00:09:37.189 | 30.00th=[ 1057], 40.00th=[ 1074], 50.00th=[ 1090], 60.00th=[ 1106], 00:09:37.189 | 70.00th=[ 1123], 80.00th=[ 1139], 90.00th=[ 1172], 95.00th=[ 1188], 00:09:37.189 | 99.00th=[ 1221], 99.50th=[ 1254], 99.90th=[ 1270], 99.95th=[ 1270], 00:09:37.189 | 99.99th=[ 1270] 00:09:37.189 write: IOPS=685, BW=2741KiB/s (2807kB/s)(2744KiB/1001msec); 0 zone resets 00:09:37.189 slat (nsec): min=9581, max=68477, avg=28081.48, stdev=9882.13 00:09:37.189 clat (usec): min=271, max=791, avg=591.16, stdev=92.18 00:09:37.189 lat (usec): min=281, max=808, avg=619.24, stdev=96.56 00:09:37.189 clat percentiles (usec): 00:09:37.189 | 1.00th=[ 359], 5.00th=[ 416], 10.00th=[ 469], 20.00th=[ 510], 00:09:37.189 | 30.00th=[ 553], 40.00th=[ 578], 50.00th=[ 594], 60.00th=[ 619], 00:09:37.189 | 70.00th=[ 652], 80.00th=[ 676], 90.00th=[ 709], 95.00th=[ 717], 00:09:37.189 | 99.00th=[ 766], 99.50th=[ 775], 99.90th=[ 791], 99.95th=[ 791], 00:09:37.189 | 99.99th=[ 791] 00:09:37.189 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:09:37.189 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:37.189 lat (usec) : 500=10.52%, 750=45.58%, 1000=6.93% 00:09:37.189 lat (msec) : 2=36.98% 00:09:37.189 cpu : usr=2.50%, sys=2.60%, ctx=1198, majf=0, minf=1 00:09:37.189 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:37.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:37.189 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:37.189 issued rwts: total=512,686,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:37.189 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:37.189 00:09:37.189 Run status group 0 (all jobs): 00:09:37.189 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:09:37.189 WRITE: bw=2741KiB/s (2807kB/s), 2741KiB/s-2741KiB/s (2807kB/s-2807kB/s), io=2744KiB (2810kB), run=1001-1001msec 00:09:37.189 00:09:37.189 Disk stats (read/write): 00:09:37.189 nvme0n1: ios=562/522, merge=0/0, ticks=612/301, in_queue=913, util=93.79% 00:09:37.190 06:52:36 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:37.190 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:37.190 06:52:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:37.190 06:52:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:09:37.190 06:52:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:37.190 06:52:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:37.190 06:52:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:37.190 06:52:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:37.190 06:52:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:09:37.190 06:52:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:37.190 06:52:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:37.190 06:52:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:37.190 06:52:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:37.190 06:52:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:37.190 06:52:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:37.190 06:52:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:37.190 06:52:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:37.190 rmmod nvme_tcp 00:09:37.190 rmmod nvme_fabrics 00:09:37.190 rmmod nvme_keyring 00:09:37.450 06:52:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:37.450 06:52:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:37.450 06:52:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:37.450 06:52:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 2980443 ']' 00:09:37.450 06:52:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 2980443 00:09:37.450 06:52:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 2980443 ']' 00:09:37.450 06:52:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 2980443 00:09:37.450 06:52:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:09:37.450 06:52:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:37.450 06:52:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2980443 00:09:37.450 06:52:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:37.450 06:52:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:37.450 06:52:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2980443' 00:09:37.450 killing process with pid 2980443 00:09:37.450 06:52:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 2980443 00:09:37.450 06:52:36 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 2980443 00:09:37.450 06:52:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:37.450 06:52:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:37.450 06:52:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:37.450 06:52:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:37.450 06:52:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:09:37.450 06:52:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:37.450 06:52:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:09:37.450 06:52:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:37.450 06:52:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:37.450 06:52:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:37.450 06:52:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:37.450 06:52:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:39.999 06:52:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:39.999 00:09:39.999 real 0m17.779s 00:09:39.999 user 0m48.882s 00:09:39.999 sys 0m6.437s 00:09:39.999 06:52:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:39.999 06:52:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:39.999 ************************************ 00:09:39.999 END TEST nvmf_nmic 00:09:39.999 ************************************ 00:09:39.999 06:52:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:39.999 06:52:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:39.999 06:52:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:39.999 06:52:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:39.999 ************************************ 00:09:39.999 START TEST nvmf_fio_target 00:09:39.999 ************************************ 00:09:39.999 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:39.999 * Looking for test storage... 
00:09:39.999 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:39.999 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:39.999 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:09:39.999 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:39.999 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:39.999 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:39.999 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:39.999 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:39.999 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:39.999 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:39.999 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:39.999 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:39.999 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:39.999 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:39.999 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:39.999 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:39.999 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:40.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.000 --rc genhtml_branch_coverage=1 00:09:40.000 --rc genhtml_function_coverage=1 00:09:40.000 --rc genhtml_legend=1 00:09:40.000 --rc geninfo_all_blocks=1 00:09:40.000 --rc geninfo_unexecuted_blocks=1 00:09:40.000 00:09:40.000 ' 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:40.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.000 --rc genhtml_branch_coverage=1 00:09:40.000 --rc genhtml_function_coverage=1 00:09:40.000 --rc genhtml_legend=1 00:09:40.000 --rc geninfo_all_blocks=1 00:09:40.000 --rc geninfo_unexecuted_blocks=1 00:09:40.000 00:09:40.000 ' 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:40.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.000 --rc genhtml_branch_coverage=1 00:09:40.000 --rc genhtml_function_coverage=1 00:09:40.000 --rc genhtml_legend=1 00:09:40.000 --rc geninfo_all_blocks=1 00:09:40.000 --rc geninfo_unexecuted_blocks=1 00:09:40.000 00:09:40.000 ' 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:40.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.000 --rc genhtml_branch_coverage=1 00:09:40.000 --rc genhtml_function_coverage=1 00:09:40.000 --rc genhtml_legend=1 00:09:40.000 --rc geninfo_all_blocks=1 00:09:40.000 --rc geninfo_unexecuted_blocks=1 00:09:40.000 00:09:40.000 ' 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:40.000 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:40.000 06:52:39 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:40.000 06:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:48.147 06:52:46 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:48.147 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:48.147 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:48.147 06:52:46 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:48.147 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:48.147 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:48.147 06:52:46 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:48.147 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:48.147 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:48.148 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms 00:09:48.148 00:09:48.148 --- 10.0.0.2 ping statistics --- 00:09:48.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:48.148 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms 00:09:48.148 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:48.148 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:48.148 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:09:48.148 00:09:48.148 --- 10.0.0.1 ping statistics --- 00:09:48.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:48.148 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:09:48.148 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:48.148 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:09:48.148 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:48.148 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:48.148 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:48.148 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:48.148 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:48.148 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:48.148 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:48.148 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:48.148 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:48.148 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:48.148 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:48.148 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=2986520 00:09:48.148 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 2986520 00:09:48.148 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:48.148 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 2986520 ']' 00:09:48.148 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:48.148 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:48.148 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:48.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:48.148 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:48.148 06:52:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:48.148 [2024-10-16 06:52:46.864971] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
00:09:48.148 [2024-10-16 06:52:46.865038] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:48.148 [2024-10-16 06:52:46.954588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:48.148 [2024-10-16 06:52:47.007857] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:48.148 [2024-10-16 06:52:47.007910] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:48.148 [2024-10-16 06:52:47.007919] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:48.148 [2024-10-16 06:52:47.007926] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:48.148 [2024-10-16 06:52:47.007932] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:48.148 [2024-10-16 06:52:47.009939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:48.148 [2024-10-16 06:52:47.010118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:48.148 [2024-10-16 06:52:47.010275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:48.148 [2024-10-16 06:52:47.010277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.408 06:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:48.408 06:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:09:48.408 06:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:48.409 06:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:48.409 06:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:48.409 06:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:48.409 06:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:48.409 [2024-10-16 06:52:47.903045] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:48.669 06:52:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:48.929 06:52:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:48.929 06:52:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:48.929 06:52:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:48.929 06:52:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:49.190 06:52:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:49.190 06:52:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:49.451 06:52:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:49.451 06:52:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:49.712 06:52:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:49.972 06:52:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:49.972 06:52:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:49.972 06:52:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:49.972 06:52:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:50.232 06:52:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:50.232 06:52:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:50.493 06:52:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:50.493 06:52:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:50.493 06:52:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:50.754 06:52:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:50.754 06:52:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:51.015 06:52:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:51.275 [2024-10-16 06:52:50.521295] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:51.275 06:52:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:51.275 06:52:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:51.536 06:52:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:52.920 06:52:52 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:52.920 06:52:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:09:52.920 06:52:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:52.920 06:52:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:09:52.920 06:52:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:09:52.920 06:52:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:09:55.465 06:52:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:55.465 06:52:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:55.465 06:52:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:55.465 06:52:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:09:55.465 06:52:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:55.465 06:52:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:09:55.465 06:52:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:55.465 [global] 00:09:55.465 thread=1 00:09:55.465 invalidate=1 00:09:55.465 rw=write 00:09:55.465 time_based=1 00:09:55.465 runtime=1 00:09:55.465 ioengine=libaio 00:09:55.465 direct=1 00:09:55.465 bs=4096 00:09:55.465 iodepth=1 00:09:55.465 norandommap=0 00:09:55.465 numjobs=1 00:09:55.465 00:09:55.465 verify_dump=1 00:09:55.465 verify_backlog=512 00:09:55.465 verify_state_save=0 00:09:55.465 do_verify=1 00:09:55.465 verify=crc32c-intel 00:09:55.465 [job0] 00:09:55.465 filename=/dev/nvme0n1 00:09:55.465 [job1] 00:09:55.465 filename=/dev/nvme0n2 00:09:55.465 [job2] 00:09:55.465 filename=/dev/nvme0n3 00:09:55.465 [job3] 00:09:55.465 filename=/dev/nvme0n4 00:09:55.465 Could not set queue depth (nvme0n1) 00:09:55.465 Could not set queue depth (nvme0n2) 00:09:55.465 Could not set queue depth (nvme0n3) 00:09:55.465 Could not set queue depth (nvme0n4) 00:09:55.465 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:55.465 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:55.465 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:55.465 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:55.466 fio-3.35 00:09:55.466 Starting 4 threads 00:09:56.851 00:09:56.851 job0: (groupid=0, jobs=1): err= 0: pid=2988252: Wed Oct 16 06:52:56 2024 00:09:56.851 read: IOPS=505, BW=2024KiB/s (2072kB/s)(2052KiB/1014msec) 00:09:56.851 slat (nsec): min=6716, max=44593, avg=20427.01, stdev=8971.00 00:09:56.851 clat (usec): min=182, max=42245, avg=1157.29, stdev=4232.05 00:09:56.851 lat (usec): min=208, max=42271, avg=1177.72, stdev=4232.38 00:09:56.851 clat percentiles (usec): 00:09:56.851 | 1.00th=[ 412], 5.00th=[ 486], 10.00th=[ 545], 20.00th=[ 594], 
00:09:56.851 | 30.00th=[ 644], 40.00th=[ 693], 50.00th=[ 725], 60.00th=[ 750], 00:09:56.851 | 70.00th=[ 775], 80.00th=[ 799], 90.00th=[ 832], 95.00th=[ 873], 00:09:56.851 | 99.00th=[30016], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:09:56.851 | 99.99th=[42206] 00:09:56.851 write: IOPS=1009, BW=4039KiB/s (4136kB/s)(4096KiB/1014msec); 0 zone resets 00:09:56.851 slat (nsec): min=9402, max=68227, avg=19787.23, stdev=12059.71 00:09:56.851 clat (usec): min=114, max=1994, avg=369.41, stdev=109.28 00:09:56.851 lat (usec): min=124, max=2028, avg=389.20, stdev=114.79 00:09:56.851 clat percentiles (usec): 00:09:56.851 | 1.00th=[ 135], 5.00th=[ 198], 10.00th=[ 258], 20.00th=[ 285], 00:09:56.851 | 30.00th=[ 306], 40.00th=[ 334], 50.00th=[ 363], 60.00th=[ 396], 00:09:56.851 | 70.00th=[ 429], 80.00th=[ 457], 90.00th=[ 490], 95.00th=[ 523], 00:09:56.851 | 99.00th=[ 603], 99.50th=[ 611], 99.90th=[ 619], 99.95th=[ 1991], 00:09:56.851 | 99.99th=[ 1991] 00:09:56.851 bw ( KiB/s): min= 4096, max= 4096, per=34.41%, avg=4096.00, stdev= 0.00, samples=2 00:09:56.851 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:09:56.851 lat (usec) : 250=5.79%, 500=57.45%, 750=23.29%, 1000=12.88% 00:09:56.851 lat (msec) : 2=0.20%, 50=0.39% 00:09:56.851 cpu : usr=1.28%, sys=3.46%, ctx=1539, majf=0, minf=1 00:09:56.851 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:56.851 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.851 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.851 issued rwts: total=513,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:56.851 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:56.851 job1: (groupid=0, jobs=1): err= 0: pid=2988253: Wed Oct 16 06:52:56 2024 00:09:56.851 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:56.851 slat (nsec): min=7007, max=59742, avg=24016.16, stdev=7617.52 00:09:56.851 clat (usec): min=465, max=42517, avg=1397.85, stdev=5152.52 00:09:56.851 lat (usec): min=488, max=42549, avg=1421.87, stdev=5153.58 00:09:56.851 clat percentiles (usec): 00:09:56.851 | 1.00th=[ 515], 5.00th=[ 586], 10.00th=[ 635], 20.00th=[ 668], 00:09:56.851 | 30.00th=[ 709], 40.00th=[ 742], 50.00th=[ 758], 60.00th=[ 783], 00:09:56.851 | 70.00th=[ 799], 80.00th=[ 824], 90.00th=[ 865], 95.00th=[ 906], 00:09:56.851 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:09:56.851 | 99.99th=[42730] 00:09:56.851 write: IOPS=543, BW=2174KiB/s (2226kB/s)(2176KiB/1001msec); 0 zone resets 00:09:56.851 slat (nsec): min=9396, max=67234, avg=31741.39, stdev=9143.30 00:09:56.851 clat (usec): min=122, max=771, avg=446.55, stdev=92.49 00:09:56.851 lat (usec): min=135, max=806, avg=478.29, stdev=96.19 00:09:56.851 clat percentiles (usec): 00:09:56.851 | 1.00th=[ 227], 5.00th=[ 293], 10.00th=[ 326], 20.00th=[ 359], 00:09:56.851 | 30.00th=[ 383], 40.00th=[ 429], 50.00th=[ 465], 60.00th=[ 486], 00:09:56.851 | 70.00th=[ 506], 80.00th=[ 529], 90.00th=[ 562], 95.00th=[ 570], 00:09:56.851 | 99.00th=[ 611], 99.50th=[ 644], 99.90th=[ 775], 99.95th=[ 775], 00:09:56.851 | 99.99th=[ 775] 00:09:56.851 bw ( KiB/s): min= 4096, max= 4096, per=34.41%, avg=4096.00, stdev= 0.00, samples=1 00:09:56.851 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:56.851 lat (usec) : 250=0.85%, 500=34.09%, 750=37.97%, 1000=26.23% 00:09:56.851 lat (msec) : 2=0.09%, 50=0.76% 00:09:56.851 cpu : usr=1.90%, sys=2.70%, ctx=1058, majf=0, minf=1 00:09:56.851 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:56.851 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.851 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.851 issued rwts: total=512,544,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:56.852 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:56.852 job2: (groupid=0, jobs=1): err= 0: pid=2988257: Wed Oct 16 06:52:56 2024 00:09:56.852 read: IOPS=60, BW=242KiB/s (247kB/s)(252KiB/1043msec) 00:09:56.852 slat (nsec): min=7228, max=52159, avg=23785.54, stdev=8938.70 00:09:56.852 clat (usec): min=506, max=42521, avg=12419.93, stdev=18639.43 00:09:56.852 lat (usec): min=517, max=42549, avg=12443.72, stdev=18642.19 00:09:56.852 clat percentiles (usec): 00:09:56.852 | 1.00th=[ 506], 5.00th=[ 545], 10.00th=[ 635], 20.00th=[ 685], 00:09:56.852 | 30.00th=[ 725], 40.00th=[ 750], 50.00th=[ 791], 60.00th=[ 840], 00:09:56.852 | 70.00th=[ 1020], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:09:56.852 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:09:56.852 | 99.99th=[42730] 00:09:56.852 write: IOPS=490, BW=1964KiB/s (2011kB/s)(2048KiB/1043msec); 0 zone resets 00:09:56.852 slat (nsec): min=10394, max=70544, avg=30340.80, stdev=11222.24 00:09:56.852 clat (usec): min=225, max=703, avg=460.26, stdev=92.75 00:09:56.852 lat (usec): min=236, max=751, avg=490.60, stdev=98.81 00:09:56.852 clat percentiles (usec): 00:09:56.852 | 1.00th=[ 262], 5.00th=[ 297], 10.00th=[ 326], 20.00th=[ 371], 00:09:56.852 | 30.00th=[ 408], 40.00th=[ 449], 50.00th=[ 474], 60.00th=[ 494], 00:09:56.852 | 70.00th=[ 515], 80.00th=[ 537], 90.00th=[ 570], 95.00th=[ 603], 00:09:56.852 | 99.00th=[ 652], 99.50th=[ 701], 99.90th=[ 701], 99.95th=[ 701], 00:09:56.852 | 99.99th=[ 701] 00:09:56.852 bw ( KiB/s): min= 4096, max= 4096, per=34.41%, avg=4096.00, stdev= 0.00, samples=1 00:09:56.852 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:56.852 lat (usec) : 250=0.35%, 500=56.87%, 750=36.17%, 1000=3.30% 00:09:56.852 lat (msec) : 2=0.17%, 50=3.13% 00:09:56.852 cpu : usr=0.96%, sys=1.44%, ctx=576, majf=0, minf=1 00:09:56.852 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:56.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.852 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.852 issued rwts: total=63,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:56.852 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:56.852 job3: (groupid=0, jobs=1): err= 0: pid=2988260: Wed Oct 16 06:52:56 2024 00:09:56.852 read: IOPS=617, BW=2470KiB/s (2529kB/s)(2472KiB/1001msec) 00:09:56.852 slat (nsec): min=7112, max=56361, avg=24587.54, stdev=7348.80 00:09:56.852 clat (usec): min=298, max=41759, avg=803.42, stdev=1652.69 00:09:56.852 lat (usec): min=325, max=41771, avg=828.01, stdev=1652.23 00:09:56.852 clat percentiles (usec): 00:09:56.852 | 1.00th=[ 445], 5.00th=[ 578], 10.00th=[ 627], 20.00th=[ 668], 00:09:56.852 | 30.00th=[ 701], 40.00th=[ 734], 50.00th=[ 758], 60.00th=[ 775], 00:09:56.852 | 70.00th=[ 791], 80.00th=[ 807], 90.00th=[ 832], 95.00th=[ 857], 00:09:56.852 | 99.00th=[ 922], 99.50th=[ 938], 99.90th=[41681], 99.95th=[41681], 00:09:56.852 | 99.99th=[41681] 00:09:56.852 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:09:56.852 slat (nsec): min=10325, max=55615, avg=29915.69, stdev=10724.70 00:09:56.852 clat (usec): min=192, max=795, 
avg=432.48, stdev=107.70 00:09:56.852 lat (usec): min=227, max=829, avg=462.39, stdev=111.14 00:09:56.852 clat percentiles (usec): 00:09:56.852 | 1.00th=[ 227], 5.00th=[ 269], 10.00th=[ 310], 20.00th=[ 343], 00:09:56.852 | 30.00th=[ 363], 40.00th=[ 412], 50.00th=[ 433], 60.00th=[ 449], 00:09:56.852 | 70.00th=[ 469], 80.00th=[ 498], 90.00th=[ 586], 95.00th=[ 652], 00:09:56.852 | 99.00th=[ 742], 99.50th=[ 766], 99.90th=[ 791], 99.95th=[ 799], 00:09:56.852 | 99.99th=[ 799] 00:09:56.852 bw ( KiB/s): min= 4096, max= 4096, per=34.41%, avg=4096.00, stdev= 0.00, samples=1 00:09:56.852 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:56.852 lat (usec) : 250=1.77%, 500=49.57%, 750=28.75%, 1000=19.85% 00:09:56.852 lat (msec) : 50=0.06% 00:09:56.852 cpu : usr=3.10%, sys=3.90%, ctx=1644, majf=0, minf=1 00:09:56.852 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:56.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.852 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.852 issued rwts: total=618,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:56.852 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:56.852 00:09:56.852 Run status group 0 (all jobs): 00:09:56.852 READ: bw=6543KiB/s (6700kB/s), 242KiB/s-2470KiB/s (247kB/s-2529kB/s), io=6824KiB (6988kB), run=1001-1043msec 00:09:56.852 WRITE: bw=11.6MiB/s (12.2MB/s), 1964KiB/s-4092KiB/s (2011kB/s-4190kB/s), io=12.1MiB (12.7MB), run=1001-1043msec 00:09:56.852 00:09:56.852 Disk stats (read/write): 00:09:56.852 nvme0n1: ios=534/720, merge=0/0, ticks=1391/240, in_queue=1631, util=83.47% 00:09:56.852 nvme0n2: ios=549/512, merge=0/0, ticks=1648/224, in_queue=1872, util=87.54% 00:09:56.852 nvme0n3: ios=97/512, merge=0/0, ticks=680/233, in_queue=913, util=95.03% 00:09:56.852 nvme0n4: ios=534/835, merge=0/0, ticks=1284/342, in_queue=1626, util=93.78% 00:09:56.852 06:52:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:56.852 [global] 00:09:56.852 thread=1 00:09:56.852 invalidate=1 00:09:56.852 rw=randwrite 00:09:56.852 time_based=1 00:09:56.852 runtime=1 00:09:56.852 ioengine=libaio 00:09:56.852 direct=1 00:09:56.852 bs=4096 00:09:56.852 iodepth=1 00:09:56.852 norandommap=0 00:09:56.852 numjobs=1 00:09:56.852 00:09:56.852 verify_dump=1 00:09:56.852 verify_backlog=512 00:09:56.852 verify_state_save=0 00:09:56.852 do_verify=1 00:09:56.852 verify=crc32c-intel 00:09:56.852 [job0] 00:09:56.852 filename=/dev/nvme0n1 00:09:56.852 [job1] 00:09:56.852 filename=/dev/nvme0n2 00:09:56.852 [job2] 00:09:56.852 filename=/dev/nvme0n3 00:09:56.852 [job3] 00:09:56.852 filename=/dev/nvme0n4 00:09:56.852 Could not set queue depth (nvme0n1) 00:09:56.852 Could not set queue depth (nvme0n2) 00:09:56.852 Could not set queue depth (nvme0n3) 00:09:56.852 Could not set queue depth (nvme0n4) 00:09:57.113 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:57.113 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:57.113 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:57.113 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:57.113 fio-3.35 00:09:57.113 Starting 4 threads 
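(Editor's note: the job file that fio-wrapper dumps above corresponds to a standalone fio invocation along these lines. This is a minimal sketch reconstructed only from the options visible in the dump; it assumes the /dev/nvme0n1..n4 block devices exposed by the earlier "nvme connect" step, and fio-wrapper's remaining internals are not shown in this log.)

# Sketch: re-running the randwrite verification pass by hand.
# Every option below is taken verbatim from the job file dumped above;
# the file path /tmp/nvmf-randwrite.fio is an arbitrary choice.
cat > /tmp/nvmf-randwrite.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=randwrite
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme0n2
[job2]
filename=/dev/nvme0n3
[job3]
filename=/dev/nvme0n4
EOF
fio /tmp/nvmf-randwrite.fio

(With do_verify=1 and verify=crc32c-intel, fio reads back each 4096-byte block it wrote and checks its CRC32C, so the runs in this log exercise data integrity over the NVMe/TCP path rather than raw throughput — which is also why runtime=1 and iodepth=1 are sufficient here.)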
00:09:58.499 00:09:58.499 job0: (groupid=0, jobs=1): err= 0: pid=2988778: Wed Oct 16 06:52:57 2024 00:09:58.499 read: IOPS=15, BW=63.9KiB/s (65.4kB/s)(64.0KiB/1002msec) 00:09:58.499 slat (nsec): min=26181, max=26823, avg=26459.31, stdev=188.55 00:09:58.499 clat (usec): min=40957, max=43041, avg=41903.05, stdev=444.99 00:09:58.499 lat (usec): min=40983, max=43068, avg=41929.51, stdev=445.07 00:09:58.499 clat percentiles (usec): 00:09:58.499 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:09:58.499 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:09:58.499 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[43254], 00:09:58.499 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:09:58.499 | 99.99th=[43254] 00:09:58.499 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:09:58.499 slat (nsec): min=9427, max=71177, avg=31895.50, stdev=7877.18 00:09:58.499 clat (usec): min=253, max=1075, avg=606.64, stdev=142.65 00:09:58.499 lat (usec): min=263, max=1109, avg=638.54, stdev=145.44 00:09:58.499 clat percentiles (usec): 00:09:58.499 | 1.00th=[ 277], 5.00th=[ 359], 10.00th=[ 429], 20.00th=[ 490], 00:09:58.499 | 30.00th=[ 529], 40.00th=[ 570], 50.00th=[ 603], 60.00th=[ 644], 00:09:58.499 | 70.00th=[ 685], 80.00th=[ 725], 90.00th=[ 783], 95.00th=[ 848], 00:09:58.499 | 99.00th=[ 938], 99.50th=[ 971], 99.90th=[ 1074], 99.95th=[ 1074], 00:09:58.499 | 99.99th=[ 1074] 00:09:58.499 bw ( KiB/s): min= 4096, max= 4096, per=41.20%, avg=4096.00, stdev= 0.00, samples=1 00:09:58.499 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:58.499 lat (usec) : 500=22.16%, 750=60.80%, 1000=13.64% 00:09:58.499 lat (msec) : 2=0.38%, 50=3.03% 00:09:58.499 cpu : usr=1.30%, sys=1.20%, ctx=531, majf=0, minf=1 00:09:58.499 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:58.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.499 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.499 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:58.499 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:58.499 job1: (groupid=0, jobs=1): err= 0: pid=2988779: Wed Oct 16 06:52:57 2024 00:09:58.499 read: IOPS=698, BW=2793KiB/s (2860kB/s)(2796KiB/1001msec) 00:09:58.499 slat (nsec): min=7095, max=44905, avg=24538.41, stdev=7597.79 00:09:58.499 clat (usec): min=406, max=914, avg=726.68, stdev=61.20 00:09:58.499 lat (usec): min=434, max=957, avg=751.22, stdev=62.54 00:09:58.499 clat percentiles (usec): 00:09:58.499 | 1.00th=[ 529], 5.00th=[ 611], 10.00th=[ 635], 20.00th=[ 676], 00:09:58.499 | 30.00th=[ 709], 40.00th=[ 734], 50.00th=[ 742], 60.00th=[ 750], 00:09:58.499 | 70.00th=[ 758], 80.00th=[ 775], 90.00th=[ 791], 95.00th=[ 807], 00:09:58.499 | 99.00th=[ 848], 99.50th=[ 865], 99.90th=[ 914], 99.95th=[ 914], 00:09:58.499 | 99.99th=[ 914] 00:09:58.500 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:09:58.500 slat (nsec): min=9562, max=65144, avg=29233.70, stdev=10234.60 00:09:58.500 clat (usec): min=171, max=747, avg=422.35, stdev=77.65 00:09:58.500 lat (usec): min=204, max=781, avg=451.59, stdev=81.42 00:09:58.500 clat percentiles (usec): 00:09:58.500 | 1.00th=[ 239], 5.00th=[ 281], 10.00th=[ 322], 20.00th=[ 355], 00:09:58.500 | 30.00th=[ 388], 40.00th=[ 416], 50.00th=[ 433], 60.00th=[ 445], 00:09:58.500 | 70.00th=[ 461], 80.00th=[ 482], 90.00th=[ 510], 95.00th=[ 545], 
00:09:58.500 | 99.00th=[ 611], 99.50th=[ 635], 99.90th=[ 717], 99.95th=[ 750], 00:09:58.500 | 99.99th=[ 750] 00:09:58.500 bw ( KiB/s): min= 4096, max= 4096, per=41.20%, avg=4096.00, stdev= 0.00, samples=1 00:09:58.500 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:58.500 lat (usec) : 250=0.75%, 500=51.19%, 750=32.21%, 1000=15.84% 00:09:58.500 cpu : usr=3.30%, sys=4.00%, ctx=1725, majf=0, minf=1 00:09:58.500 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:58.500 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.500 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.500 issued rwts: total=699,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:58.500 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:58.500 job2: (groupid=0, jobs=1): err= 0: pid=2988780: Wed Oct 16 06:52:57 2024 00:09:58.500 read: IOPS=18, BW=73.8KiB/s (75.6kB/s)(76.0KiB/1030msec) 00:09:58.500 slat (nsec): min=26971, max=27876, avg=27389.32, stdev=195.56 00:09:58.500 clat (usec): min=40953, max=42537, avg=41784.31, stdev=432.70 00:09:58.500 lat (usec): min=40981, max=42564, avg=41811.70, stdev=432.56 00:09:58.500 clat percentiles (usec): 00:09:58.500 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:09:58.500 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:09:58.500 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:09:58.500 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:09:58.500 | 99.99th=[42730] 00:09:58.500 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:09:58.500 slat (nsec): min=9692, max=54045, avg=29594.29, stdev=9969.29 00:09:58.500 clat (usec): min=143, max=663, avg=422.73, stdev=96.70 00:09:58.500 lat (usec): min=168, max=696, avg=452.33, stdev=101.76 00:09:58.500 clat percentiles (usec): 00:09:58.500 | 1.00th=[ 174], 5.00th=[ 269], 10.00th=[ 285], 20.00th=[ 322], 00:09:58.500 | 30.00th=[ 371], 40.00th=[ 404], 50.00th=[ 437], 60.00th=[ 461], 00:09:58.500 | 70.00th=[ 486], 80.00th=[ 506], 90.00th=[ 537], 95.00th=[ 570], 00:09:58.500 | 99.00th=[ 619], 99.50th=[ 644], 99.90th=[ 660], 99.95th=[ 660], 00:09:58.500 | 99.99th=[ 660] 00:09:58.500 bw ( KiB/s): min= 4096, max= 4096, per=41.20%, avg=4096.00, stdev= 0.00, samples=1 00:09:58.500 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:58.500 lat (usec) : 250=1.88%, 500=73.82%, 750=20.72% 00:09:58.500 lat (msec) : 50=3.58% 00:09:58.500 cpu : usr=1.07%, sys=1.17%, ctx=532, majf=0, minf=1 00:09:58.500 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:58.500 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.500 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.500 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:58.500 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:58.500 job3: (groupid=0, jobs=1): err= 0: pid=2988781: Wed Oct 16 06:52:57 2024 00:09:58.500 read: IOPS=18, BW=75.5KiB/s (77.4kB/s)(76.0KiB/1006msec) 00:09:58.500 slat (nsec): min=26345, max=26940, avg=26802.95, stdev=149.02 00:09:58.500 clat (usec): min=40943, max=42108, avg=41862.32, stdev=329.68 00:09:58.500 lat (usec): min=40969, max=42135, avg=41889.13, stdev=329.68 00:09:58.500 clat percentiles (usec): 00:09:58.500 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:09:58.500 | 30.00th=[41681], 
40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:09:58.500 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:58.500 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:58.500 | 99.99th=[42206] 00:09:58.500 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:09:58.500 slat (nsec): min=3583, max=53377, avg=28569.80, stdev=11043.98 00:09:58.500 clat (usec): min=189, max=823, avg=372.89, stdev=91.75 00:09:58.500 lat (usec): min=222, max=828, avg=401.46, stdev=93.72 00:09:58.500 clat percentiles (usec): 00:09:58.500 | 1.00th=[ 204], 5.00th=[ 233], 10.00th=[ 255], 20.00th=[ 302], 00:09:58.500 | 30.00th=[ 330], 40.00th=[ 347], 50.00th=[ 367], 60.00th=[ 379], 00:09:58.500 | 70.00th=[ 404], 80.00th=[ 449], 90.00th=[ 494], 95.00th=[ 537], 00:09:58.500 | 99.00th=[ 611], 99.50th=[ 693], 99.90th=[ 824], 99.95th=[ 824], 00:09:58.500 | 99.99th=[ 824] 00:09:58.500 bw ( KiB/s): min= 4096, max= 4096, per=41.20%, avg=4096.00, stdev= 0.00, samples=1 00:09:58.500 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:58.500 lat (usec) : 250=9.04%, 500=78.72%, 750=8.47%, 1000=0.19% 00:09:58.500 lat (msec) : 50=3.58% 00:09:58.500 cpu : usr=1.00%, sys=1.19%, ctx=532, majf=0, minf=1 00:09:58.500 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:58.500 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.500 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.500 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:58.500 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:58.500 00:09:58.500 Run status group 0 (all jobs): 00:09:58.500 READ: bw=2924KiB/s (2994kB/s), 63.9KiB/s-2793KiB/s (65.4kB/s-2860kB/s), io=3012KiB (3084kB), run=1001-1030msec 00:09:58.500 WRITE: bw=9942KiB/s (10.2MB/s), 1988KiB/s-4092KiB/s (2036kB/s-4190kB/s), io=10.0MiB (10.5MB), run=1001-1030msec 00:09:58.500 00:09:58.500 Disk stats (read/write): 00:09:58.500 nvme0n1: ios=61/512, merge=0/0, ticks=1050/295, in_queue=1345, util=84.27% 00:09:58.500 nvme0n2: ios=565/978, merge=0/0, ticks=529/405, in_queue=934, util=88.79% 00:09:58.500 nvme0n3: ios=77/512, merge=0/0, ticks=743/206, in_queue=949, util=95.36% 00:09:58.500 nvme0n4: ios=38/512, merge=0/0, ticks=1514/168, in_queue=1682, util=94.24% 00:09:58.500 06:52:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:58.500 [global] 00:09:58.500 thread=1 00:09:58.500 invalidate=1 00:09:58.500 rw=write 00:09:58.500 time_based=1 00:09:58.500 runtime=1 00:09:58.500 ioengine=libaio 00:09:58.500 direct=1 00:09:58.500 bs=4096 00:09:58.500 iodepth=128 00:09:58.500 norandommap=0 00:09:58.500 numjobs=1 00:09:58.500 00:09:58.500 verify_dump=1 00:09:58.500 verify_backlog=512 00:09:58.500 verify_state_save=0 00:09:58.500 do_verify=1 00:09:58.500 verify=crc32c-intel 00:09:58.500 [job0] 00:09:58.500 filename=/dev/nvme0n1 00:09:58.500 [job1] 00:09:58.500 filename=/dev/nvme0n2 00:09:58.500 [job2] 00:09:58.500 filename=/dev/nvme0n3 00:09:58.500 [job3] 00:09:58.500 filename=/dev/nvme0n4 00:09:58.500 Could not set queue depth (nvme0n1) 00:09:58.500 Could not set queue depth (nvme0n2) 00:09:58.500 Could not set queue depth (nvme0n3) 00:09:58.500 Could not set queue depth (nvme0n4) 00:09:58.761 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:09:58.761 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:58.761 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:58.761 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:58.761 fio-3.35 00:09:58.761 Starting 4 threads 00:10:00.150 00:10:00.150 job0: (groupid=0, jobs=1): err= 0: pid=2989307: Wed Oct 16 06:52:59 2024 00:10:00.150 read: IOPS=4117, BW=16.1MiB/s (16.9MB/s)(16.2MiB/1005msec) 00:10:00.150 slat (nsec): min=926, max=20662k, avg=124721.22, stdev=949027.03 00:10:00.150 clat (usec): min=3035, max=61292, avg=16666.65, stdev=12971.50 00:10:00.150 lat (usec): min=5078, max=61299, avg=16791.37, stdev=13036.48 00:10:00.150 clat percentiles (usec): 00:10:00.150 | 1.00th=[ 5735], 5.00th=[ 7373], 10.00th=[ 7832], 20.00th=[ 8291], 00:10:00.150 | 30.00th=[ 8848], 40.00th=[10421], 50.00th=[12649], 60.00th=[13173], 00:10:00.150 | 70.00th=[14222], 80.00th=[19530], 90.00th=[36439], 95.00th=[52167], 00:10:00.150 | 99.00th=[60031], 99.50th=[61080], 99.90th=[61080], 99.95th=[61080], 00:10:00.150 | 99.99th=[61080] 00:10:00.150 write: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec); 0 zone resets 00:10:00.150 slat (nsec): min=1687, max=13139k, avg=99872.55, stdev=572224.90 00:10:00.150 clat (usec): min=4418, max=44103, avg=12500.90, stdev=8650.65 00:10:00.150 lat (usec): min=4424, max=44114, avg=12600.78, stdev=8706.62 00:10:00.150 clat percentiles (usec): 00:10:00.150 | 1.00th=[ 4555], 5.00th=[ 6783], 10.00th=[ 7504], 20.00th=[ 7963], 00:10:00.150 | 30.00th=[ 8291], 40.00th=[ 8586], 50.00th=[ 9241], 60.00th=[ 9896], 00:10:00.150 | 70.00th=[10683], 80.00th=[13173], 90.00th=[27395], 95.00th=[36439], 00:10:00.150 | 99.00th=[41157], 99.50th=[42730], 99.90th=[44303], 99.95th=[44303], 00:10:00.150 | 99.99th=[44303] 00:10:00.150 bw ( KiB/s): min=15944, max=20232, per=17.89%, avg=18088.00, stdev=3032.07, samples=2 00:10:00.150 iops : min= 3986, max= 5058, avg=4522.00, stdev=758.02, samples=2 00:10:00.150 lat (msec) : 4=0.01%, 10=50.13%, 20=33.64%, 50=13.03%, 100=3.19% 00:10:00.150 cpu : usr=2.29%, sys=5.78%, ctx=442, majf=0, minf=1 00:10:00.150 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:00.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:00.150 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:00.150 issued rwts: total=4138,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:00.150 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:00.150 job1: (groupid=0, jobs=1): err= 0: pid=2989308: Wed Oct 16 06:52:59 2024 00:10:00.150 read: IOPS=6703, BW=26.2MiB/s (27.5MB/s)(26.4MiB/1007msec) 00:10:00.150 slat (nsec): min=956, max=10482k, avg=70072.45, stdev=509193.66 00:10:00.150 clat (usec): min=2547, max=24613, avg=8935.78, stdev=3302.52 00:10:00.150 lat (usec): min=2553, max=24615, avg=9005.85, stdev=3327.53 00:10:00.150 clat percentiles (usec): 00:10:00.150 | 1.00th=[ 3097], 5.00th=[ 5276], 10.00th=[ 6259], 20.00th=[ 6915], 00:10:00.150 | 30.00th=[ 7308], 40.00th=[ 7635], 50.00th=[ 7963], 60.00th=[ 8586], 00:10:00.150 | 70.00th=[ 9241], 80.00th=[10552], 90.00th=[13698], 95.00th=[15795], 00:10:00.150 | 99.00th=[19530], 99.50th=[21627], 99.90th=[23725], 99.95th=[24511], 00:10:00.150 | 99.99th=[24511] 00:10:00.150 write: IOPS=7118, BW=27.8MiB/s (29.2MB/s)(28.0MiB/1007msec); 0 zone resets 00:10:00.150 slat 
(nsec): min=1651, max=8653.4k, avg=65989.74, stdev=347841.32 00:10:00.150 clat (usec): min=984, max=24612, avg=9341.41, stdev=4013.48 00:10:00.150 lat (usec): min=995, max=24614, avg=9407.40, stdev=4036.35 00:10:00.150 clat percentiles (usec): 00:10:00.150 | 1.00th=[ 2311], 5.00th=[ 3982], 10.00th=[ 4555], 20.00th=[ 6063], 00:10:00.150 | 30.00th=[ 6849], 40.00th=[ 7570], 50.00th=[ 8160], 60.00th=[ 9503], 00:10:00.150 | 70.00th=[11863], 80.00th=[13042], 90.00th=[14615], 95.00th=[17171], 00:10:00.150 | 99.00th=[19792], 99.50th=[20317], 99.90th=[21103], 99.95th=[21103], 00:10:00.150 | 99.99th=[24511] 00:10:00.150 bw ( KiB/s): min=27824, max=29256, per=28.22%, avg=28540.00, stdev=1012.58, samples=2 00:10:00.150 iops : min= 6956, max= 7314, avg=7135.00, stdev=253.14, samples=2 00:10:00.150 lat (usec) : 1000=0.01% 00:10:00.150 lat (msec) : 2=0.32%, 4=3.79%, 10=64.26%, 20=30.92%, 50=0.69% 00:10:00.150 cpu : usr=5.37%, sys=6.76%, ctx=715, majf=0, minf=1 00:10:00.150 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:10:00.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:00.150 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:00.150 issued rwts: total=6750,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:00.150 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:00.150 job2: (groupid=0, jobs=1): err= 0: pid=2989310: Wed Oct 16 06:52:59 2024 00:10:00.150 read: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec) 00:10:00.150 slat (nsec): min=954, max=9564.8k, avg=81363.71, stdev=439944.36 00:10:00.150 clat (usec): min=2846, max=21530, avg=10544.48, stdev=2130.73 00:10:00.150 lat (usec): min=2848, max=21531, avg=10625.85, stdev=2128.61 00:10:00.150 clat percentiles (usec): 00:10:00.150 | 1.00th=[ 3326], 5.00th=[ 8160], 10.00th=[ 8848], 20.00th=[ 9372], 00:10:00.150 | 30.00th=[ 9765], 40.00th=[10159], 50.00th=[10552], 60.00th=[10814], 00:10:00.150 | 70.00th=[11076], 80.00th=[11600], 90.00th=[12387], 95.00th=[13435], 00:10:00.150 | 99.00th=[18744], 99.50th=[18744], 99.90th=[20055], 99.95th=[20055], 00:10:00.150 | 99.99th=[21627] 00:10:00.150 write: IOPS=6272, BW=24.5MiB/s (25.7MB/s)(24.6MiB/1003msec); 0 zone resets 00:10:00.150 slat (nsec): min=1619, max=12923k, avg=75713.75, stdev=469566.88 00:10:00.150 clat (usec): min=881, max=38011, avg=9782.76, stdev=2731.45 00:10:00.150 lat (usec): min=1921, max=38015, avg=9858.48, stdev=2738.12 00:10:00.150 clat percentiles (usec): 00:10:00.150 | 1.00th=[ 2999], 5.00th=[ 5342], 10.00th=[ 7308], 20.00th=[ 8225], 00:10:00.150 | 30.00th=[ 9110], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[10159], 00:10:00.150 | 70.00th=[10552], 80.00th=[11076], 90.00th=[11600], 95.00th=[12387], 00:10:00.150 | 99.00th=[17433], 99.50th=[22152], 99.90th=[35914], 99.95th=[38011], 00:10:00.150 | 99.99th=[38011] 00:10:00.150 bw ( KiB/s): min=24232, max=25072, per=24.38%, avg=24652.00, stdev=593.97, samples=2 00:10:00.150 iops : min= 6058, max= 6268, avg=6163.00, stdev=148.49, samples=2 00:10:00.150 lat (usec) : 1000=0.01% 00:10:00.150 lat (msec) : 2=0.06%, 4=2.20%, 10=44.19%, 20=53.07%, 50=0.47% 00:10:00.150 cpu : usr=2.59%, sys=4.89%, ctx=596, majf=0, minf=1 00:10:00.150 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:00.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:00.150 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:00.150 issued rwts: total=6144,6291,0,0 short=0,0,0,0 dropped=0,0,0,0 
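The job files that fio-wrapper prints before each run map one-to-one onto stock fio options: comparing the fio.sh@52 command line with the generated job sections shows that the wrapper's -i flag becomes bs, -d becomes iodepth, -t becomes rw, -r becomes runtime, and -v switches on the crc32c-intel verify settings. As a rough sketch (not part of the test scripts; the device path and values are copied from the job file above, and all option names are stock fio options), job0 of this run is equivalent to the direct invocation:

  fio --name=job0 --filename=/dev/nvme0n1 \
      --rw=write --bs=4096 --iodepth=128 --ioengine=libaio --direct=1 \
      --time_based=1 --runtime=1 --numjobs=1 \
      --verify=crc32c-intel --do_verify=1 --verify_backlog=512 --verify_state_save=0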
00:10:00.150 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:00.150 job3: (groupid=0, jobs=1): err= 0: pid=2989311: Wed Oct 16 06:52:59 2024 00:10:00.150 read: IOPS=7125, BW=27.8MiB/s (29.2MB/s)(28.0MiB/1006msec) 00:10:00.150 slat (nsec): min=947, max=7355.4k, avg=69735.49, stdev=457744.28 00:10:00.150 clat (usec): min=4743, max=20242, avg=8962.49, stdev=1969.12 00:10:00.150 lat (usec): min=4750, max=20269, avg=9032.23, stdev=2012.39 00:10:00.150 clat percentiles (usec): 00:10:00.150 | 1.00th=[ 5800], 5.00th=[ 6652], 10.00th=[ 7439], 20.00th=[ 7832], 00:10:00.150 | 30.00th=[ 7963], 40.00th=[ 8160], 50.00th=[ 8356], 60.00th=[ 8586], 00:10:00.150 | 70.00th=[ 8848], 80.00th=[10159], 90.00th=[11600], 95.00th=[13566], 00:10:00.150 | 99.00th=[16057], 99.50th=[17171], 99.90th=[17171], 99.95th=[19268], 00:10:00.150 | 99.99th=[20317] 00:10:00.150 write: IOPS=7346, BW=28.7MiB/s (30.1MB/s)(28.9MiB/1006msec); 0 zone resets 00:10:00.150 slat (nsec): min=1658, max=5767.2k, avg=62215.43, stdev=347516.49 00:10:00.150 clat (usec): min=1264, max=24737, avg=8573.98, stdev=2144.95 00:10:00.150 lat (usec): min=1278, max=24746, avg=8636.20, stdev=2162.36 00:10:00.150 clat percentiles (usec): 00:10:00.150 | 1.00th=[ 4424], 5.00th=[ 5735], 10.00th=[ 6587], 20.00th=[ 7439], 00:10:00.150 | 30.00th=[ 7701], 40.00th=[ 7832], 50.00th=[ 7963], 60.00th=[ 8225], 00:10:00.150 | 70.00th=[ 8848], 80.00th=[10159], 90.00th=[10945], 95.00th=[12256], 00:10:00.150 | 99.00th=[17433], 99.50th=[18482], 99.90th=[21365], 99.95th=[21627], 00:10:00.150 | 99.99th=[24773] 00:10:00.150 bw ( KiB/s): min=28672, max=29432, per=28.73%, avg=29052.00, stdev=537.40, samples=2 00:10:00.150 iops : min= 7168, max= 7358, avg=7263.00, stdev=134.35, samples=2 00:10:00.150 lat (msec) : 2=0.01%, 4=0.27%, 10=76.87%, 20=22.73%, 50=0.12% 00:10:00.150 cpu : usr=4.88%, sys=6.27%, ctx=766, majf=0, minf=2 00:10:00.150 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:10:00.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:00.150 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:00.150 issued rwts: total=7168,7391,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:00.150 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:00.150 00:10:00.150 Run status group 0 (all jobs): 00:10:00.150 READ: bw=93.9MiB/s (98.4MB/s), 16.1MiB/s-27.8MiB/s (16.9MB/s-29.2MB/s), io=94.5MiB (99.1MB), run=1003-1007msec 00:10:00.150 WRITE: bw=98.8MiB/s (104MB/s), 17.9MiB/s-28.7MiB/s (18.8MB/s-30.1MB/s), io=99.4MiB (104MB), run=1003-1007msec 00:10:00.150 00:10:00.150 Disk stats (read/write): 00:10:00.150 nvme0n1: ios=3093/3232, merge=0/0, ticks=17894/15828, in_queue=33722, util=83.97% 00:10:00.150 nvme0n2: ios=5672/5799, merge=0/0, ticks=45076/50637, in_queue=95713, util=91.03% 00:10:00.150 nvme0n3: ios=5174/5408, merge=0/0, ticks=21536/22286, in_queue=43822, util=94.73% 00:10:00.150 nvme0n4: ios=6165/6269, merge=0/0, ticks=29057/24549, in_queue=53606, util=93.80% 00:10:00.150 06:52:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:00.150 [global] 00:10:00.150 thread=1 00:10:00.150 invalidate=1 00:10:00.150 rw=randwrite 00:10:00.150 time_based=1 00:10:00.150 runtime=1 00:10:00.150 ioengine=libaio 00:10:00.150 direct=1 00:10:00.150 bs=4096 00:10:00.150 iodepth=128 00:10:00.150 norandommap=0 00:10:00.150 numjobs=1 00:10:00.150 00:10:00.150 
verify_dump=1 00:10:00.150 verify_backlog=512 00:10:00.150 verify_state_save=0 00:10:00.150 do_verify=1 00:10:00.150 verify=crc32c-intel 00:10:00.150 [job0] 00:10:00.150 filename=/dev/nvme0n1 00:10:00.150 [job1] 00:10:00.151 filename=/dev/nvme0n2 00:10:00.151 [job2] 00:10:00.151 filename=/dev/nvme0n3 00:10:00.151 [job3] 00:10:00.151 filename=/dev/nvme0n4 00:10:00.151 Could not set queue depth (nvme0n1) 00:10:00.151 Could not set queue depth (nvme0n2) 00:10:00.151 Could not set queue depth (nvme0n3) 00:10:00.151 Could not set queue depth (nvme0n4) 00:10:00.412 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:00.412 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:00.412 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:00.412 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:00.412 fio-3.35 00:10:00.412 Starting 4 threads 00:10:01.807 00:10:01.807 job0: (groupid=0, jobs=1): err= 0: pid=2989829: Wed Oct 16 06:53:01 2024 00:10:01.807 read: IOPS=2116, BW=8466KiB/s (8670kB/s)(8568KiB/1012msec) 00:10:01.807 slat (nsec): min=901, max=37107k, avg=272099.77, stdev=2166622.32 00:10:01.807 clat (msec): min=4, max=146, avg=35.87, stdev=34.77 00:10:01.807 lat (msec): min=4, max=146, avg=36.14, stdev=34.98 00:10:01.807 clat percentiles (msec): 00:10:01.807 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 7], 20.00th=[ 8], 00:10:01.807 | 30.00th=[ 11], 40.00th=[ 15], 50.00th=[ 21], 60.00th=[ 37], 00:10:01.807 | 70.00th=[ 47], 80.00th=[ 61], 90.00th=[ 77], 95.00th=[ 124], 00:10:01.807 | 99.00th=[ 142], 99.50th=[ 142], 99.90th=[ 146], 99.95th=[ 146], 00:10:01.807 | 99.99th=[ 146] 00:10:01.807 write: IOPS=2529, BW=9.88MiB/s (10.4MB/s)(10.0MiB/1012msec); 0 zone resets 00:10:01.807 slat (nsec): min=1494, max=17572k, avg=159460.51, stdev=862139.72 00:10:01.807 clat (msec): min=2, max=102, avg=20.03, stdev=22.81 00:10:01.807 lat (msec): min=2, max=102, avg=20.19, stdev=22.99 00:10:01.807 clat percentiles (msec): 00:10:01.807 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 6], 20.00th=[ 6], 00:10:01.807 | 30.00th=[ 7], 40.00th=[ 8], 50.00th=[ 11], 60.00th=[ 14], 00:10:01.807 | 70.00th=[ 18], 80.00th=[ 29], 90.00th=[ 46], 95.00th=[ 87], 00:10:01.807 | 99.00th=[ 95], 99.50th=[ 97], 99.90th=[ 103], 99.95th=[ 103], 00:10:01.807 | 99.99th=[ 103] 00:10:01.807 bw ( KiB/s): min= 6928, max=13280, per=16.09%, avg=10104.00, stdev=4491.54, samples=2 00:10:01.807 iops : min= 1732, max= 3320, avg=2526.00, stdev=1122.89, samples=2 00:10:01.807 lat (msec) : 4=0.68%, 10=38.01%, 20=22.99%, 50=21.69%, 100=12.59% 00:10:01.807 lat (msec) : 250=4.04% 00:10:01.807 cpu : usr=1.38%, sys=2.67%, ctx=282, majf=0, minf=1 00:10:01.807 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:10:01.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.807 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:01.807 issued rwts: total=2142,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.807 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:01.807 job1: (groupid=0, jobs=1): err= 0: pid=2989830: Wed Oct 16 06:53:01 2024 00:10:01.807 read: IOPS=3658, BW=14.3MiB/s (15.0MB/s)(14.5MiB/1012msec) 00:10:01.807 slat (nsec): min=904, max=10926k, avg=121941.76, stdev=846300.25 00:10:01.807 clat (usec): min=4272, max=43953, 
avg=16575.13, stdev=7434.15 00:10:01.807 lat (usec): min=5551, max=43979, avg=16697.07, stdev=7505.20 00:10:01.807 clat percentiles (usec): 00:10:01.807 | 1.00th=[ 6259], 5.00th=[ 7111], 10.00th=[ 8455], 20.00th=[10683], 00:10:01.807 | 30.00th=[11207], 40.00th=[13173], 50.00th=[14877], 60.00th=[16909], 00:10:01.807 | 70.00th=[19530], 80.00th=[21627], 90.00th=[28181], 95.00th=[31327], 00:10:01.807 | 99.00th=[38011], 99.50th=[38011], 99.90th=[41681], 99.95th=[43254], 00:10:01.807 | 99.99th=[43779] 00:10:01.807 write: IOPS=4047, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1012msec); 0 zone resets 00:10:01.807 slat (nsec): min=1524, max=10140k, avg=129120.87, stdev=707687.88 00:10:01.807 clat (usec): min=3250, max=76237, avg=16274.83, stdev=15888.90 00:10:01.807 lat (usec): min=3284, max=76245, avg=16403.95, stdev=15994.22 00:10:01.807 clat percentiles (usec): 00:10:01.807 | 1.00th=[ 3490], 5.00th=[ 4293], 10.00th=[ 5145], 20.00th=[ 6128], 00:10:01.807 | 30.00th=[ 6980], 40.00th=[ 8291], 50.00th=[ 9503], 60.00th=[10945], 00:10:01.807 | 70.00th=[15401], 80.00th=[22676], 90.00th=[41681], 95.00th=[54789], 00:10:01.807 | 99.00th=[73925], 99.50th=[73925], 99.90th=[76022], 99.95th=[76022], 00:10:01.807 | 99.99th=[76022] 00:10:01.807 bw ( KiB/s): min= 8112, max=24576, per=26.02%, avg=16344.00, stdev=11641.81, samples=2 00:10:01.807 iops : min= 2028, max= 6144, avg=4086.00, stdev=2910.45, samples=2 00:10:01.807 lat (msec) : 4=2.01%, 10=35.98%, 20=35.64%, 50=22.80%, 100=3.57% 00:10:01.807 cpu : usr=3.36%, sys=3.76%, ctx=305, majf=0, minf=3 00:10:01.807 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:01.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.807 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:01.807 issued rwts: total=3702,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.807 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:01.807 job2: (groupid=0, jobs=1): err= 0: pid=2989831: Wed Oct 16 06:53:01 2024 00:10:01.807 read: IOPS=4553, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1012msec) 00:10:01.807 slat (nsec): min=981, max=15419k, avg=92152.07, stdev=725110.15 00:10:01.807 clat (usec): min=1473, max=40533, avg=12346.12, stdev=6600.96 00:10:01.807 lat (usec): min=1491, max=40540, avg=12438.28, stdev=6653.45 00:10:01.807 clat percentiles (usec): 00:10:01.807 | 1.00th=[ 1942], 5.00th=[ 4621], 10.00th=[ 6456], 20.00th=[ 8094], 00:10:01.807 | 30.00th=[ 8586], 40.00th=[ 9503], 50.00th=[10290], 60.00th=[11600], 00:10:01.807 | 70.00th=[14484], 80.00th=[17695], 90.00th=[19530], 95.00th=[23725], 00:10:01.807 | 99.00th=[40109], 99.50th=[40633], 99.90th=[40633], 99.95th=[40633], 00:10:01.807 | 99.99th=[40633] 00:10:01.807 write: IOPS=4570, BW=17.9MiB/s (18.7MB/s)(18.1MiB/1012msec); 0 zone resets 00:10:01.807 slat (nsec): min=1613, max=15315k, avg=112651.81, stdev=704541.38 00:10:01.807 clat (usec): min=709, max=97037, avg=15291.83, stdev=18579.02 00:10:01.807 lat (usec): min=742, max=97047, avg=15404.48, stdev=18706.07 00:10:01.807 clat percentiles (usec): 00:10:01.807 | 1.00th=[ 1647], 5.00th=[ 3818], 10.00th=[ 4359], 20.00th=[ 5342], 00:10:01.807 | 30.00th=[ 6128], 40.00th=[ 6849], 50.00th=[ 8848], 60.00th=[ 9634], 00:10:01.807 | 70.00th=[10552], 80.00th=[17171], 90.00th=[41681], 95.00th=[52691], 00:10:01.807 | 99.00th=[92799], 99.50th=[93848], 99.90th=[96994], 99.95th=[96994], 00:10:01.807 | 99.99th=[96994] 00:10:01.807 bw ( KiB/s): min=15472, max=21392, per=29.35%, avg=18432.00, stdev=4186.07, samples=2 
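Every per-job block in these fio reports carries the same fields: slat/clat/lat (submission, completion and total latency), a clat percentile table, and bw/iops sample summaries, while the per-group totals land on the "Run status group 0 (all jobs)" lines at the end of each run. A one-line sketch for skimming the aggregates out of a captured log (the log filename is a placeholder):

  # list the aggregate bandwidth lines from every fio run in a captured log
  grep -E '(READ|WRITE): bw=' nvmf-fio.log

For this randwrite run the group totals work out to READ 56.5MiB/s and WRITE 61.3MiB/s, as reported further down.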
00:10:01.807 iops : min= 3868, max= 5348, avg=4608.00, stdev=1046.52, samples=2 00:10:01.807 lat (usec) : 750=0.01%, 1000=0.21% 00:10:01.807 lat (msec) : 2=1.25%, 4=3.44%, 10=50.90%, 20=30.60%, 50=10.70% 00:10:01.807 lat (msec) : 100=2.89% 00:10:01.807 cpu : usr=3.17%, sys=6.23%, ctx=278, majf=0, minf=1 00:10:01.807 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:01.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.807 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:01.807 issued rwts: total=4608,4625,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.807 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:01.807 job3: (groupid=0, jobs=1): err= 0: pid=2989832: Wed Oct 16 06:53:01 2024 00:10:01.807 read: IOPS=4144, BW=16.2MiB/s (17.0MB/s)(16.4MiB/1012msec) 00:10:01.807 slat (nsec): min=929, max=11798k, avg=113492.67, stdev=744215.34 00:10:01.807 clat (usec): min=4924, max=59092, avg=12964.64, stdev=7705.72 00:10:01.807 lat (usec): min=4933, max=59102, avg=13078.13, stdev=7797.74 00:10:01.807 clat percentiles (usec): 00:10:01.807 | 1.00th=[ 5276], 5.00th=[ 6063], 10.00th=[ 6915], 20.00th=[ 7767], 00:10:01.807 | 30.00th=[ 8848], 40.00th=[ 9634], 50.00th=[10552], 60.00th=[11338], 00:10:01.807 | 70.00th=[15795], 80.00th=[16909], 90.00th=[18744], 95.00th=[29230], 00:10:01.807 | 99.00th=[46924], 99.50th=[50070], 99.90th=[56361], 99.95th=[56361], 00:10:01.807 | 99.99th=[58983] 00:10:01.807 write: IOPS=4553, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1012msec); 0 zone resets 00:10:01.807 slat (nsec): min=1579, max=11297k, avg=105384.32, stdev=642104.50 00:10:01.807 clat (usec): min=816, max=68063, avg=16004.93, stdev=15582.09 00:10:01.807 lat (usec): min=825, max=68071, avg=16110.32, stdev=15676.83 00:10:01.807 clat percentiles (usec): 00:10:01.807 | 1.00th=[ 1221], 5.00th=[ 3982], 10.00th=[ 5800], 20.00th=[ 6587], 00:10:01.807 | 30.00th=[ 7504], 40.00th=[ 7963], 50.00th=[ 8979], 60.00th=[10421], 00:10:01.807 | 70.00th=[13435], 80.00th=[21890], 90.00th=[45351], 95.00th=[54264], 00:10:01.807 | 99.00th=[63701], 99.50th=[64750], 99.90th=[67634], 99.95th=[67634], 00:10:01.807 | 99.99th=[67634] 00:10:01.807 bw ( KiB/s): min=14808, max=21816, per=29.16%, avg=18312.00, stdev=4955.40, samples=2 00:10:01.807 iops : min= 3702, max= 5454, avg=4578.00, stdev=1238.85, samples=2 00:10:01.807 lat (usec) : 1000=0.03% 00:10:01.807 lat (msec) : 2=1.03%, 4=1.61%, 10=47.18%, 20=34.24%, 50=11.99% 00:10:01.807 lat (msec) : 100=3.91% 00:10:01.807 cpu : usr=2.47%, sys=5.14%, ctx=368, majf=0, minf=2 00:10:01.807 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:01.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.807 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:01.807 issued rwts: total=4194,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.807 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:01.807 00:10:01.807 Run status group 0 (all jobs): 00:10:01.808 READ: bw=56.5MiB/s (59.3MB/s), 8466KiB/s-17.8MiB/s (8670kB/s-18.7MB/s), io=57.2MiB (60.0MB), run=1012-1012msec 00:10:01.808 WRITE: bw=61.3MiB/s (64.3MB/s), 9.88MiB/s-17.9MiB/s (10.4MB/s-18.7MB/s), io=62.1MiB (65.1MB), run=1012-1012msec 00:10:01.808 00:10:01.808 Disk stats (read/write): 00:10:01.808 nvme0n1: ios=1441/1536, merge=0/0, ticks=15722/11249, in_queue=26971, util=86.87% 00:10:01.808 nvme0n2: ios=3579/3584, merge=0/0, ticks=41219/31474, in_queue=72693, 
util=87.26% 00:10:01.808 nvme0n3: ios=3618/3659, merge=0/0, ticks=38455/57468, in_queue=95923, util=100.00% 00:10:01.808 nvme0n4: ios=3584/3999, merge=0/0, ticks=38574/49505, in_queue=88079, util=88.98% 00:10:01.808 06:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:01.808 06:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2990161 00:10:01.808 06:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:01.808 06:53:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:01.808 [global] 00:10:01.808 thread=1 00:10:01.808 invalidate=1 00:10:01.808 rw=read 00:10:01.808 time_based=1 00:10:01.808 runtime=10 00:10:01.808 ioengine=libaio 00:10:01.808 direct=1 00:10:01.808 bs=4096 00:10:01.808 iodepth=1 00:10:01.808 norandommap=1 00:10:01.808 numjobs=1 00:10:01.808 00:10:01.808 [job0] 00:10:01.808 filename=/dev/nvme0n1 00:10:01.808 [job1] 00:10:01.808 filename=/dev/nvme0n2 00:10:01.808 [job2] 00:10:01.808 filename=/dev/nvme0n3 00:10:01.808 [job3] 00:10:01.808 filename=/dev/nvme0n4 00:10:01.808 Could not set queue depth (nvme0n1) 00:10:01.808 Could not set queue depth (nvme0n2) 00:10:01.808 Could not set queue depth (nvme0n3) 00:10:01.808 Could not set queue depth (nvme0n4) 00:10:02.068 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:02.068 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:02.068 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:02.068 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:02.068 fio-3.35 00:10:02.068 Starting 4 threads 00:10:04.656 06:53:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:04.927 06:53:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:04.927 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=4747264, buflen=4096 00:10:04.927 fio: pid=2990360, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:05.194 06:53:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:05.194 06:53:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:05.194 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=970752, buflen=4096 00:10:05.194 fio: pid=2990359, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:05.194 06:53:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:05.194 06:53:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:05.456 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=1396736, buflen=4096 00:10:05.456 fio: pid=2990356, 
err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:05.456 06:53:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:05.456 06:53:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:05.456 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=307200, buflen=4096 00:10:05.456 fio: pid=2990358, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:05.456 00:10:05.456 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2990356: Wed Oct 16 06:53:04 2024 00:10:05.456 read: IOPS=116, BW=466KiB/s (477kB/s)(1364KiB/2930msec) 00:10:05.456 slat (usec): min=7, max=214, avg=26.16, stdev=11.18 00:10:05.456 clat (usec): min=573, max=43052, avg=8495.37, stdev=15715.21 00:10:05.456 lat (usec): min=602, max=43077, avg=8521.54, stdev=15716.02 00:10:05.456 clat percentiles (usec): 00:10:05.456 | 1.00th=[ 906], 5.00th=[ 1012], 10.00th=[ 1045], 20.00th=[ 1074], 00:10:05.456 | 30.00th=[ 1090], 40.00th=[ 1106], 50.00th=[ 1123], 60.00th=[ 1139], 00:10:05.456 | 70.00th=[ 1156], 80.00th=[ 1205], 90.00th=[41681], 95.00th=[42206], 00:10:05.456 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:10:05.456 | 99.99th=[43254] 00:10:05.456 bw ( KiB/s): min= 88, max= 880, per=22.56%, avg=529.60, stdev=362.23, samples=5 00:10:05.456 iops : min= 22, max= 220, avg=132.40, stdev=90.56, samples=5 00:10:05.456 lat (usec) : 750=0.29%, 1000=3.51% 00:10:05.456 lat (msec) : 2=77.78%, 50=18.13% 00:10:05.456 cpu : usr=0.07%, sys=0.38%, ctx=343, majf=0, minf=2 00:10:05.456 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:05.456 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.456 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.456 issued rwts: total=342,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:05.456 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:05.456 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2990358: Wed Oct 16 06:53:04 2024 00:10:05.456 read: IOPS=24, BW=97.1KiB/s (99.4kB/s)(300KiB/3091msec) 00:10:05.456 slat (usec): min=24, max=2589, avg=64.18, stdev=295.22 00:10:05.456 clat (usec): min=938, max=43070, avg=40849.37, stdev=6665.65 00:10:05.456 lat (usec): min=989, max=43987, avg=40914.07, stdev=6673.76 00:10:05.456 clat percentiles (usec): 00:10:05.456 | 1.00th=[ 938], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:05.456 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:10:05.456 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:10:05.456 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:10:05.456 | 99.99th=[43254] 00:10:05.456 bw ( KiB/s): min= 92, max= 104, per=4.09%, avg=96.67, stdev= 3.93, samples=6 00:10:05.456 iops : min= 23, max= 26, avg=24.17, stdev= 0.98, samples=6 00:10:05.456 lat (usec) : 1000=1.32% 00:10:05.456 lat (msec) : 2=1.32%, 50=96.05% 00:10:05.456 cpu : usr=0.10%, sys=0.00%, ctx=80, majf=0, minf=2 00:10:05.456 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:05.456 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.456 complete : 0=1.3%, 4=98.7%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.456 issued rwts: total=76,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:05.456 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:05.456 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2990359: Wed Oct 16 06:53:04 2024 00:10:05.456 read: IOPS=86, BW=345KiB/s (353kB/s)(948KiB/2747msec) 00:10:05.456 slat (usec): min=26, max=129, avg=27.52, stdev= 6.93 00:10:05.456 clat (usec): min=687, max=42130, avg=11460.00, stdev=17608.80 00:10:05.456 lat (usec): min=714, max=42157, avg=11487.52, stdev=17609.31 00:10:05.456 clat percentiles (usec): 00:10:05.456 | 1.00th=[ 963], 5.00th=[ 1012], 10.00th=[ 1037], 20.00th=[ 1074], 00:10:05.456 | 30.00th=[ 1106], 40.00th=[ 1123], 50.00th=[ 1156], 60.00th=[ 1188], 00:10:05.456 | 70.00th=[ 1221], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:10:05.456 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:05.456 | 99.99th=[42206] 00:10:05.456 bw ( KiB/s): min= 96, max= 1408, per=15.74%, avg=369.60, stdev=580.99, samples=5 00:10:05.456 iops : min= 24, max= 352, avg=92.40, stdev=145.25, samples=5 00:10:05.457 lat (usec) : 750=0.42%, 1000=2.94% 00:10:05.457 lat (msec) : 2=70.59%, 50=25.63% 00:10:05.457 cpu : usr=0.11%, sys=0.33%, ctx=239, majf=0, minf=2 00:10:05.457 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:05.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.457 complete : 0=0.4%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.457 issued rwts: total=238,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:05.457 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:05.457 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2990360: Wed Oct 16 06:53:04 2024 00:10:05.457 read: IOPS=451, BW=1804KiB/s (1847kB/s)(4636KiB/2570msec) 00:10:05.457 slat (nsec): min=6335, max=61860, avg=23973.29, stdev=8502.88 00:10:05.457 clat (usec): min=283, max=42986, avg=2169.28, stdev=7702.22 00:10:05.457 lat (usec): min=290, max=43013, avg=2193.25, stdev=7702.93 00:10:05.457 clat percentiles (usec): 00:10:05.457 | 1.00th=[ 449], 5.00th=[ 529], 10.00th=[ 553], 20.00th=[ 603], 00:10:05.457 | 30.00th=[ 635], 40.00th=[ 652], 50.00th=[ 685], 60.00th=[ 717], 00:10:05.457 | 70.00th=[ 742], 80.00th=[ 766], 90.00th=[ 799], 95.00th=[ 840], 00:10:05.457 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:10:05.457 | 99.99th=[42730] 00:10:05.457 bw ( KiB/s): min= 88, max= 5648, per=78.94%, avg=1851.20, stdev=2542.98, samples=5 00:10:05.457 iops : min= 22, max= 1412, avg=462.80, stdev=635.74, samples=5 00:10:05.457 lat (usec) : 500=3.19%, 750=69.05%, 1000=24.05% 00:10:05.457 lat (msec) : 50=3.62% 00:10:05.457 cpu : usr=0.39%, sys=1.91%, ctx=1160, majf=0, minf=1 00:10:05.457 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:05.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.457 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.457 issued rwts: total=1160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:05.457 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:05.457 00:10:05.457 Run status group 0 (all jobs): 00:10:05.457 READ: bw=2345KiB/s (2401kB/s), 97.1KiB/s-1804KiB/s (99.4kB/s-1847kB/s), io=7248KiB (7422kB), run=2570-3091msec 00:10:05.457 00:10:05.457 Disk stats (read/write): 
00:10:05.457 nvme0n1: ios=337/0, merge=0/0, ticks=2723/0, in_queue=2723, util=92.62% 00:10:05.457 nvme0n2: ios=96/0, merge=0/0, ticks=3021/0, in_queue=3021, util=94.49% 00:10:05.457 nvme0n3: ios=232/0, merge=0/0, ticks=2493/0, in_queue=2493, util=95.42% 00:10:05.457 nvme0n4: ios=1158/0, merge=0/0, ticks=2372/0, in_queue=2372, util=96.33% 00:10:05.718 06:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:05.718 06:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:05.718 06:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:05.718 06:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:05.979 06:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:05.979 06:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:06.239 06:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:06.239 06:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:06.500 06:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:06.500 06:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2990161 00:10:06.500 06:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:06.500 06:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:06.500 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.500 06:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:06.500 06:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:10:06.500 06:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:06.500 06:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:06.500 06:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:06.500 06:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:06.500 06:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:10:06.500 06:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:06.500 06:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:06.500 nvmf hotplug test: fio failed as expected 00:10:06.500 06:53:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:06.760 06:53:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:06.760 06:53:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:06.760 06:53:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:06.760 06:53:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:06.760 06:53:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:06.760 06:53:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:06.760 06:53:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:06.760 06:53:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:06.760 06:53:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:06.760 06:53:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:06.760 06:53:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:06.760 rmmod nvme_tcp 00:10:06.760 rmmod nvme_fabrics 00:10:06.760 rmmod nvme_keyring 00:10:06.760 06:53:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:06.760 06:53:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:06.760 06:53:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:06.760 06:53:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 2986520 ']' 00:10:06.760 06:53:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 2986520 00:10:06.760 06:53:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 2986520 ']' 00:10:06.760 06:53:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 2986520 00:10:06.760 06:53:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:10:06.760 06:53:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:06.760 06:53:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2986520 00:10:06.760 06:53:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:06.760 06:53:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:06.760 06:53:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2986520' 00:10:06.761 killing process with pid 2986520 00:10:06.761 06:53:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 2986520 00:10:06.761 06:53:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 2986520 00:10:07.022 06:53:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:07.022 06:53:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:07.022 06:53:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@522 -- # 
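The failure being handled here is deliberate: target/fio.sh starts a 10-second read job in the background (fio.sh@58, fio_pid=2990161), sleeps, and then hot-removes the backing bdevs over RPC while I/O is still in flight, so every fio job dies with err=95 (Operation not supported) as soon as its namespace disappears; that is why the script accepts fio_status=4 and prints "nvmf hotplug test: fio failed as expected". Reduced to a sketch (script paths abbreviated, bdev names as in this log):

  scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &   # 10s read job
  fio_pid=$!
  sleep 3
  scripts/rpc.py bdev_raid_delete concat0     # hot-remove bdevs mid-I/O
  scripts/rpc.py bdev_raid_delete raid0
  scripts/rpc.py bdev_malloc_delete Malloc0   # ...and so on for Malloc1..Malloc6
  wait $fio_pid                               # non-zero exit is the expected outcome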
nvmf_tcp_fini 00:10:07.022 06:53:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:07.022 06:53:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 00:10:07.022 06:53:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:07.022 06:53:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:10:07.022 06:53:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:07.022 06:53:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:07.022 06:53:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:07.022 06:53:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:07.022 06:53:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:08.942 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:08.942 00:10:08.942 real 0m29.344s 00:10:08.942 user 2m40.058s 00:10:08.942 sys 0m9.345s 00:10:08.942 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:08.942 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.942 ************************************ 00:10:08.942 END TEST nvmf_fio_target 00:10:08.942 ************************************ 00:10:08.942 06:53:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:08.942 06:53:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:08.942 06:53:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:08.942 06:53:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:09.204 ************************************ 00:10:09.204 START TEST nvmf_bdevio 00:10:09.204 ************************************ 00:10:09.204 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:09.204 * Looking for test storage... 
00:10:09.204 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:09.204 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:09.204 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:10:09.204 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:09.204 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:09.204 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:09.204 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:09.204 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:09.204 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:09.204 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:09.204 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:09.204 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:09.204 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:09.204 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:09.204 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:09.204 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:09.204 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:09.204 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:09.204 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:09.204 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:09.204 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:09.204 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:09.204 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:09.204 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:09.204 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:09.204 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:09.204 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:09.204 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:09.204 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:09.204 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:09.204 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:09.204 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:09.204 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:09.204 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:09.204 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:09.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.204 --rc genhtml_branch_coverage=1 00:10:09.204 --rc genhtml_function_coverage=1 00:10:09.204 --rc genhtml_legend=1 00:10:09.204 --rc geninfo_all_blocks=1 00:10:09.204 --rc geninfo_unexecuted_blocks=1 00:10:09.204 00:10:09.204 ' 00:10:09.204 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:09.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.204 --rc genhtml_branch_coverage=1 00:10:09.204 --rc genhtml_function_coverage=1 00:10:09.204 --rc genhtml_legend=1 00:10:09.204 --rc geninfo_all_blocks=1 00:10:09.204 --rc geninfo_unexecuted_blocks=1 00:10:09.204 00:10:09.204 ' 00:10:09.204 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:09.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.204 --rc genhtml_branch_coverage=1 00:10:09.204 --rc genhtml_function_coverage=1 00:10:09.204 --rc genhtml_legend=1 00:10:09.204 --rc geninfo_all_blocks=1 00:10:09.204 --rc geninfo_unexecuted_blocks=1 00:10:09.204 00:10:09.204 ' 00:10:09.204 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:09.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.205 --rc genhtml_branch_coverage=1 00:10:09.205 --rc genhtml_function_coverage=1 00:10:09.205 --rc genhtml_legend=1 00:10:09.205 --rc geninfo_all_blocks=1 00:10:09.205 --rc geninfo_unexecuted_blocks=1 00:10:09.205 00:10:09.205 ' 00:10:09.205 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:09.205 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:09.205 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:09.205 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:09.205 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:09.205 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:09.205 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:09.205 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:09.205 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:09.205 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:09.205 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:09.205 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:09.205 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:09.205 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:09.205 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:09.205 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:09.205 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:09.205 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:09.205 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:09.205 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:09.205 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:09.205 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:09.205 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:09.205 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.205 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.205 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.205 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:09.205 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.205 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:09.205 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:09.205 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:09.205 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:09.205 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:09.205 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:09.205 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:09.205 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:09.205 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:09.205 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:09.205 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:09.466 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:09.466 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:09.466 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:10:09.466 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:09.466 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:09.466 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:09.466 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:09.466 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:09.466 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.466 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:09.466 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:09.466 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:09.466 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:09.466 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:09.466 06:53:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:17.617 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:17.617 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:17.617 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:17.617 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:17.617 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:17.617 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:17.617 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:17.617 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:17.617 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:17.617 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:17.617 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:17.617 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:17.617 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:17.617 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:17.617 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:17.617 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:17.617 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:17.617 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:17.617 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:17.617 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:17.617 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:17.617 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:17.617 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:17.617 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:17.617 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:17.617 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:17.617 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:17.617 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:17.617 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:17.617 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:17.617 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:17.617 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:17.617 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:17.617 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:17.617 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:17.617 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:17.617 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:17.617 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:17.617 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:17.617 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:17.617 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:17.617 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:17.617 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:17.617 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:17.617 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:17.617 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:17.617 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:17.617 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:17.617 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:17.617 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:17.617 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:17.617 06:53:15 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:17.617 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:17.617 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:17.618 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:17.618 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:17.618 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:17.618 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:17.618 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:17.618 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:17.618 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:17.618 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:17.618 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:17.618 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:17.618 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:17.618 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:17.618 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:17.618 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:17.618 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:17.618 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:17.618 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:17.618 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:17.618 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:17.618 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:10:17.618 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:17.618 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:17.618 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:17.618 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:17.618 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:17.618 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:17.618 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:17.618 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:17.618 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:17.618 
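For orientation amid the xtrace noise: the discovery loop above resolves each matched PCI function to its kernel net device purely through sysfs and keeps only the basename. A minimal standalone sketch of that lookup, using one PCI address from this run (the nullglob guard is added here for safety; the harness instead relies on its own "(( ... == 0 ))" emptiness checks, and apparently also compares the device's operstate against "up", which is what the post-expansion "[[ up == up ]]" lines above are):

    shopt -s nullglob
    pci=0000:4b:00.0                                  # one of the two E810 ports found above
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")           # strip the sysfs path, keep the ifname
    echo "Found net devices under $pci: ${pci_net_devs[*]}"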
06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:17.618 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:17.618 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:17.618 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:17.618 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:17.618 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:17.618 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:17.618 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:17.618 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:17.618 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:17.618 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:17.618 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:17.618 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:17.618 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:17.618 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:17.618 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:17.618 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:17.618 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:17.618 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:10:17.618 00:10:17.618 --- 10.0.0.2 ping statistics --- 00:10:17.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.618 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:10:17.618 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:17.618 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:17.618 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:10:17.618 00:10:17.618 --- 10.0.0.1 ping statistics --- 00:10:17.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.618 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:10:17.618 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:17.618 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:10:17.618 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:17.618 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:17.618 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:17.618 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:17.618 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:17.618 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:17.618 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:17.618 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:17.618 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:17.618 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:17.618 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:17.618 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=2995458 00:10:17.618 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 2995458 00:10:17.618 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:17.618 06:53:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 2995458 ']' 00:10:17.618 06:53:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:17.618 06:53:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:17.618 06:53:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:17.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:17.618 06:53:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:17.618 06:53:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:17.618 [2024-10-16 06:53:16.056010] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
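Stripped of the test prefixes, the nvmf_tcp_init trace above wires the host's two E810 ports back-to-back: the target-side port is moved into a private network namespace so initiator traffic (10.0.0.1 on cvl_0_1) must cross the physical link to reach the target (10.0.0.2 on cvl_0_0), and both directions are verified with a ping before the target app starts. The same setup as a plain root script, with the interface and namespace names taken from this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # target port leaves the default ns
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator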
00:10:17.618 [2024-10-16 06:53:16.056075] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:17.618 [2024-10-16 06:53:16.146392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:17.618 [2024-10-16 06:53:16.199494] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:17.618 [2024-10-16 06:53:16.199549] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:17.618 [2024-10-16 06:53:16.199558] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:17.618 [2024-10-16 06:53:16.199566] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:17.618 [2024-10-16 06:53:16.199572] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:17.618 [2024-10-16 06:53:16.201693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:17.618 [2024-10-16 06:53:16.201875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:17.618 [2024-10-16 06:53:16.202015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:17.618 [2024-10-16 06:53:16.202151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:17.618 06:53:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:17.618 06:53:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:10:17.618 06:53:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:17.618 06:53:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:17.618 06:53:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:17.618 06:53:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:17.618 06:53:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:17.618 06:53:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.618 06:53:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:17.618 [2024-10-16 06:53:16.945742] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:17.618 06:53:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.618 06:53:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:17.618 06:53:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.618 06:53:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:17.618 Malloc0 00:10:17.618 06:53:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.618 06:53:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:17.618 06:53:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.618 06:53:16 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:17.618 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.618 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:17.618 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.618 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:17.618 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.618 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:17.618 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.618 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:17.619 [2024-10-16 06:53:17.029140] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:17.619 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.619 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:17.619 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:17.619 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:10:17.619 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:10:17.619 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:10:17.619 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:10:17.619 { 00:10:17.619 "params": { 00:10:17.619 "name": "Nvme$subsystem", 00:10:17.619 "trtype": "$TEST_TRANSPORT", 00:10:17.619 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:17.619 "adrfam": "ipv4", 00:10:17.619 "trsvcid": "$NVMF_PORT", 00:10:17.619 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:17.619 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:17.619 "hdgst": ${hdgst:-false}, 00:10:17.619 "ddgst": ${ddgst:-false} 00:10:17.619 }, 00:10:17.619 "method": "bdev_nvme_attach_controller" 00:10:17.619 } 00:10:17.619 EOF 00:10:17.619 )") 00:10:17.619 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:10:17.619 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 00:10:17.619 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:10:17.619 06:53:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:10:17.619 "params": { 00:10:17.619 "name": "Nvme1", 00:10:17.619 "trtype": "tcp", 00:10:17.619 "traddr": "10.0.0.2", 00:10:17.619 "adrfam": "ipv4", 00:10:17.619 "trsvcid": "4420", 00:10:17.619 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:17.619 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:17.619 "hdgst": false, 00:10:17.619 "ddgst": false 00:10:17.619 }, 00:10:17.619 "method": "bdev_nvme_attach_controller" 00:10:17.619 }' 00:10:17.619 [2024-10-16 06:53:17.086822] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
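The target configuration interleaved through the trace above reduces to five RPCs; rpc_cmd is this harness's wrapper that forwards to scripts/rpc.py. A sketch of the same sequence as direct calls, assuming the app's default /var/tmp/spdk.sock RPC socket; the bdevio initiator side then consumes the gen_nvmf_target_json output rendered above via --json /dev/fd/62 rather than issuing RPCs:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192    # TCP transport opts from NVMF_TRANSPORT_OPTS
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0       # 64 MiB RAM-backed bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420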
00:10:17.619 [2024-10-16 06:53:17.086902] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2995751 ] 00:10:17.886 [2024-10-16 06:53:17.171332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:17.886 [2024-10-16 06:53:17.228719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:17.886 [2024-10-16 06:53:17.228936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:17.886 [2024-10-16 06:53:17.228954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.147 I/O targets: 00:10:18.147 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:18.147 00:10:18.147 00:10:18.147 CUnit - A unit testing framework for C - Version 2.1-3 00:10:18.147 http://cunit.sourceforge.net/ 00:10:18.147 00:10:18.147 00:10:18.147 Suite: bdevio tests on: Nvme1n1 00:10:18.147 Test: blockdev write read block ...passed 00:10:18.147 Test: blockdev write zeroes read block ...passed 00:10:18.147 Test: blockdev write zeroes read no split ...passed 00:10:18.147 Test: blockdev write zeroes read split ...passed 00:10:18.147 Test: blockdev write zeroes read split partial ...passed 00:10:18.147 Test: blockdev reset ...[2024-10-16 06:53:17.533836] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:18.147 [2024-10-16 06:53:17.533934] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x184e0d0 (9): Bad file descriptor 00:10:18.147 [2024-10-16 06:53:17.549131] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:18.147 passed 00:10:18.147 Test: blockdev write read 8 blocks ...passed 00:10:18.147 Test: blockdev write read size > 128k ...passed 00:10:18.147 Test: blockdev write read invalid size ...passed 00:10:18.147 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:18.147 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:18.147 Test: blockdev write read max offset ...passed 00:10:18.409 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:18.409 Test: blockdev writev readv 8 blocks ...passed 00:10:18.409 Test: blockdev writev readv 30 x 1block ...passed 00:10:18.409 Test: blockdev writev readv block ...passed 00:10:18.409 Test: blockdev writev readv size > 128k ...passed 00:10:18.409 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:18.409 Test: blockdev comparev and writev ...[2024-10-16 06:53:17.816189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:18.409 [2024-10-16 06:53:17.816239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:18.409 [2024-10-16 06:53:17.816257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:18.409 [2024-10-16 06:53:17.816268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:18.409 [2024-10-16 06:53:17.816734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:18.409 [2024-10-16 06:53:17.816749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:18.409 [2024-10-16 06:53:17.816763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:18.409 [2024-10-16 06:53:17.816772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:18.409 [2024-10-16 06:53:17.817332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:18.409 [2024-10-16 06:53:17.817347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:18.409 [2024-10-16 06:53:17.817362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:18.409 [2024-10-16 06:53:17.817371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:18.409 [2024-10-16 06:53:17.817796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:18.409 [2024-10-16 06:53:17.817809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:18.409 [2024-10-16 06:53:17.817822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:18.409 [2024-10-16 06:53:17.817839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:18.409 passed 00:10:18.409 Test: blockdev nvme passthru rw ...passed 00:10:18.409 Test: blockdev nvme passthru vendor specific ...[2024-10-16 06:53:17.902662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:18.409 [2024-10-16 06:53:17.902678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:18.409 [2024-10-16 06:53:17.902942] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:18.409 [2024-10-16 06:53:17.902954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:18.409 [2024-10-16 06:53:17.903174] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:18.409 [2024-10-16 06:53:17.903184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:18.409 [2024-10-16 06:53:17.903415] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:18.409 [2024-10-16 06:53:17.903426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:18.410 passed 00:10:18.671 Test: blockdev nvme admin passthru ...passed 00:10:18.671 Test: blockdev copy ...passed 00:10:18.671 00:10:18.671 Run Summary: Type Total Ran Passed Failed Inactive 00:10:18.671 suites 1 1 n/a 0 0 00:10:18.671 tests 23 23 23 0 0 00:10:18.671 asserts 152 152 152 0 n/a 00:10:18.671 00:10:18.671 Elapsed time = 1.122 seconds 00:10:18.671 06:53:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:18.671 06:53:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.671 06:53:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:18.671 06:53:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.671 06:53:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:18.671 06:53:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:18.671 06:53:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:18.671 06:53:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:18.671 06:53:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:18.671 06:53:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:18.671 06:53:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:18.671 06:53:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:18.671 rmmod nvme_tcp 00:10:18.671 rmmod nvme_fabrics 00:10:18.671 rmmod nvme_keyring 00:10:18.671 06:53:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:18.671 06:53:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:18.671 06:53:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
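A note on the "(xx/yy)" pairs nvme_qpair.c prints in the comparev/writev and passthru traces above: they are the NVMe Status Code Type and Status Code in hex, and the failures are exactly what those negative tests provoke, which is why the tests still report passed. A throwaway decoder for the three pairs seen in this run (sct_sc is a hypothetical helper; the meanings are from the NVMe base specification's status-code tables):

    # Decode the "(SCT/SC)" pairs printed above (both values hex).
    sct_sc() {
        case "$1/$2" in
            00/01) echo 'Generic: Invalid Command Opcode' ;;          # passthru vendor-specific cmds
            00/09) echo 'Generic: Aborted - Failed Fused Command' ;;  # WRITE half of each fused pair
            02/85) echo 'Media Errors: Compare Failure' ;;            # COMPARE half of each fused pair
            *)     echo 'see the NVMe base spec status tables' ;;
        esac
    }
    sct_sc 02 85    # -> Media Errors: Compare Failure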
00:10:18.671 06:53:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 2995458 ']' 00:10:18.671 06:53:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 2995458 00:10:18.671 06:53:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 2995458 ']' 00:10:18.671 06:53:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 2995458 00:10:18.671 06:53:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:10:18.671 06:53:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:18.671 06:53:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2995458 00:10:18.933 06:53:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:10:18.933 06:53:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:10:18.933 06:53:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2995458' 00:10:18.933 killing process with pid 2995458 00:10:18.933 06:53:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 2995458 00:10:18.933 06:53:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 2995458 00:10:18.933 06:53:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:18.933 06:53:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:18.933 06:53:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:18.933 06:53:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:18.933 06:53:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:10:18.933 06:53:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:18.933 06:53:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:10:18.933 06:53:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:18.933 06:53:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:18.933 06:53:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:18.933 06:53:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:18.933 06:53:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:21.482 06:53:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:21.482 00:10:21.482 real 0m11.967s 00:10:21.482 user 0m12.664s 00:10:21.482 sys 0m6.127s 00:10:21.482 06:53:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:21.482 06:53:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:21.482 ************************************ 00:10:21.482 END TEST nvmf_bdevio 00:10:21.482 ************************************ 00:10:21.482 06:53:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:21.482 00:10:21.482 real 5m4.518s 00:10:21.482 user 11m49.382s 00:10:21.482 sys 1m50.054s 
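The nvmftestfini teardown, condensed from the trace around this point. The SPDK_NVMF comment tag that the ipts wrapper attached at setup is what lets a plain grep -v strip only this test's firewall rules; the netns deletion line is my reading of what _remove_spdk_ns amounts to here, since its body is xtrace-suppressed above:

    kill "$nvmfpid"                                  # killprocess 2995458: SIGTERM, wait for exit
    modprobe -v -r nvme-tcp nvme-fabrics             # rmmod nvme_tcp/nvme_fabrics/nvme_keyring
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk                  # assumed: what _remove_spdk_ns does here
    ip -4 addr flush cvl_0_1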
00:10:21.482 06:53:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:21.482 06:53:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:21.482 ************************************ 00:10:21.482 END TEST nvmf_target_core 00:10:21.482 ************************************ 00:10:21.483 06:53:20 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:21.483 06:53:20 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:21.483 06:53:20 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:21.483 06:53:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:21.483 ************************************ 00:10:21.483 START TEST nvmf_target_extra 00:10:21.483 ************************************ 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:21.483 * Looking for test storage... 00:10:21.483 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:21.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.483 --rc genhtml_branch_coverage=1 00:10:21.483 --rc genhtml_function_coverage=1 00:10:21.483 --rc genhtml_legend=1 00:10:21.483 --rc geninfo_all_blocks=1 00:10:21.483 --rc geninfo_unexecuted_blocks=1 00:10:21.483 00:10:21.483 ' 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:21.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.483 --rc genhtml_branch_coverage=1 00:10:21.483 --rc genhtml_function_coverage=1 00:10:21.483 --rc genhtml_legend=1 00:10:21.483 --rc geninfo_all_blocks=1 00:10:21.483 --rc geninfo_unexecuted_blocks=1 00:10:21.483 00:10:21.483 ' 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:21.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.483 --rc genhtml_branch_coverage=1 00:10:21.483 --rc genhtml_function_coverage=1 00:10:21.483 --rc genhtml_legend=1 00:10:21.483 --rc geninfo_all_blocks=1 00:10:21.483 --rc geninfo_unexecuted_blocks=1 00:10:21.483 00:10:21.483 ' 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:21.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.483 --rc genhtml_branch_coverage=1 00:10:21.483 --rc genhtml_function_coverage=1 00:10:21.483 --rc genhtml_legend=1 00:10:21.483 --rc geninfo_all_blocks=1 00:10:21.483 --rc geninfo_unexecuted_blocks=1 00:10:21.483 00:10:21.483 ' 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
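The lcov version check traced above ("lt 1.15 2" calling cmp_versions 1.15 '<' 2 to pick the lcov option set) is easy to lose in the expansion noise. A reduced re-derivation of the scripts/common.sh logic, minus the per-component digit validation the real decimal() helper performs:

    cmp_versions() {    # usage: cmp_versions 1.15 '<' 2
        local -a ver1 ver2
        local op=$2 v a b
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$3"
        for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}           # missing components compare as 0
            if (( 10#$a > 10#$b )); then [[ $op == '>' || $op == '>=' ]]; return; fi
            if (( 10#$a < 10#$b )); then [[ $op == '<' || $op == '<=' ]]; return; fi
        done
        [[ $op == *'='* ]]    # all components equal: only <=, >=, == hold
    }
    cmp_versions 1.15 '<' 2 && echo 'older lcov option set'    # -> older lcov option set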
00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.483 06:53:20 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:21.484 06:53:20 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.484 06:53:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:21.484 06:53:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:21.484 06:53:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:21.484 06:53:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:21.484 06:53:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:21.484 06:53:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:21.484 06:53:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:21.484 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:21.484 06:53:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:21.484 06:53:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:21.484 06:53:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:21.484 06:53:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:21.484 06:53:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:21.484 06:53:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:21.484 06:53:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:21.484 06:53:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:21.484 06:53:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:21.484 06:53:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:21.484 ************************************ 00:10:21.484 START TEST nvmf_example 00:10:21.484 ************************************ 00:10:21.484 06:53:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:21.484 * Looking for test storage... 
00:10:21.484 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:21.484 06:53:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:21.484 06:53:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lcov --version 00:10:21.484 06:53:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:21.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.746 --rc genhtml_branch_coverage=1 00:10:21.746 --rc genhtml_function_coverage=1 00:10:21.746 --rc genhtml_legend=1 00:10:21.746 --rc geninfo_all_blocks=1 00:10:21.746 --rc geninfo_unexecuted_blocks=1 00:10:21.746 00:10:21.746 ' 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:21.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.746 --rc genhtml_branch_coverage=1 00:10:21.746 --rc genhtml_function_coverage=1 00:10:21.746 --rc genhtml_legend=1 00:10:21.746 --rc geninfo_all_blocks=1 00:10:21.746 --rc geninfo_unexecuted_blocks=1 00:10:21.746 00:10:21.746 ' 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:21.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.746 --rc genhtml_branch_coverage=1 00:10:21.746 --rc genhtml_function_coverage=1 00:10:21.746 --rc genhtml_legend=1 00:10:21.746 --rc geninfo_all_blocks=1 00:10:21.746 --rc geninfo_unexecuted_blocks=1 00:10:21.746 00:10:21.746 ' 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:21.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.746 --rc genhtml_branch_coverage=1 00:10:21.746 --rc genhtml_function_coverage=1 00:10:21.746 --rc genhtml_legend=1 00:10:21.746 --rc geninfo_all_blocks=1 00:10:21.746 --rc geninfo_unexecuted_blocks=1 00:10:21.746 00:10:21.746 ' 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:21.746 06:53:21 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:21.746 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:21.747 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:21.747 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:21.747 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:21.747 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:21.747 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:21.747 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:21.747 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:21.747 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:21.747 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:21.747 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:21.747 06:53:21 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:21.747 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:21.747 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:21.747 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:21.747 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:21.747 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:21.747 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:21.747 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:21.747 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:21.747 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:21.747 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:21.747 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:21.747 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:21.747 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:21.747 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:21.747 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:21.747 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:21.747 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:21.747 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:21.747 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:21.747 06:53:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:29.893 06:53:28 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:29.893 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
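gather_supported_nvmf_pci_devs above builds per-vendor device-ID tables (Intel E810 0x1592/0x159b, X722 0x37d2, and the Mellanox ConnectX IDs) and then walks the matching PCI functions. Outside the harness, the enumeration for one vendor:device pair can be approximated with lspci (a sketch; the pci_bus_cache array itself is populated elsewhere in common.sh):

    # PCI functions for the Intel E810 NIC (0x8086:0x159b) found in this run
    lspci -Dn -d 8086:159b | awk '{print $1}'
    # -> 0000:4b:00.0
    #    0000:4b:00.1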
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:29.893 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:29.893 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:29.893 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:29.893 06:53:28 
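The 'Found net devices under ...' lines come from globbing /sys/bus/pci/devices/$pci/net/*, which is how the script maps a PCI function to its kernel netdev name. The standalone equivalent, using the first port from this run:

    pci=0000:4b:00.0
    ls "/sys/bus/pci/devices/$pci/net/"
    # -> cvl_0_0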
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # is_hw=yes 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:29.893 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:29.893 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.713 ms 00:10:29.893 00:10:29.893 --- 10.0.0.2 ping statistics --- 00:10:29.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:29.893 rtt min/avg/max/mdev = 0.713/0.713/0.713/0.000 ms 00:10:29.893 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:29.893 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:29.893 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:10:29.893 00:10:29.893 --- 10.0.0.1 ping statistics --- 00:10:29.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:29.894 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:10:29.894 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:29.894 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # return 0 00:10:29.894 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:29.894 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:29.894 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:29.894 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:29.894 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:29.894 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:29.894 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:29.894 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:29.894 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:29.894 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:29.894 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:29.894 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:29.894 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:29.894 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3000399 00:10:29.894 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:29.894 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:29.894 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3000399 00:10:29.894 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 3000399 ']' 00:10:29.894 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.894 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:29.894 06:53:28 
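Condensed from the nvmf_tcp_init trace above: the second E810 port stays in the root namespace as the initiator (10.0.0.1) while the first is moved into a private namespace as the target (10.0.0.2), so traffic crosses a real link rather than loopback; the firewall rule is tagged with an SPDK_NVMF comment so teardown can later strip exactly these rules, and nvmftestfini was already registered via trap during nvmftestinit, so the plumbing is undone even if the test aborts. The sequence boils down to:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator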
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:29.894 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:29.894 06:53:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:30.154 06:53:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:30.154 06:53:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:10:30.154 06:53:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:30.154 06:53:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:30.154 06:53:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:30.154 06:53:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:30.154 06:53:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.154 06:53:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:30.154 06:53:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.154 06:53:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:30.154 06:53:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.154 06:53:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:30.154 06:53:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.154 06:53:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:30.154 06:53:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:30.154 06:53:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.154 06:53:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:30.154 06:53:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.154 06:53:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:30.154 06:53:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:30.154 06:53:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.154 06:53:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:30.154 06:53:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.154 06:53:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:30.154 06:53:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:30.154 06:53:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:30.154 06:53:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.154 06:53:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:30.154 06:53:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:42.389 Initializing NVMe Controllers 00:10:42.389 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:42.389 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:42.389 Initialization complete. Launching workers. 00:10:42.389 ======================================================== 00:10:42.389 Latency(us) 00:10:42.389 Device Information : IOPS MiB/s Average min max 00:10:42.389 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19196.37 74.99 3334.05 636.62 19436.30 00:10:42.389 ======================================================== 00:10:42.389 Total : 19196.37 74.99 3334.05 636.62 19436.30 00:10:42.389 00:10:42.389 06:53:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:42.389 06:53:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:42.389 06:53:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:42.389 06:53:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:42.389 06:53:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:42.389 06:53:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:42.389 06:53:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:42.389 06:53:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:42.389 rmmod nvme_tcp 00:10:42.389 rmmod nvme_fabrics 00:10:42.389 rmmod nvme_keyring 00:10:42.389 06:53:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:42.389 06:53:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:10:42.389 06:53:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:42.389 06:53:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@515 -- # '[' -n 3000399 ']' 00:10:42.389 06:53:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # killprocess 3000399 00:10:42.389 06:53:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 3000399 ']' 00:10:42.389 06:53:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 3000399 00:10:42.389 06:53:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:10:42.389 06:53:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:42.389 06:53:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3000399 00:10:42.389 06:53:40 
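The rpc_cmd calls above provision the target end to end: a TCP transport (the -o -u 8192 arguments come from $NVMF_TRANSPORT_OPTS), a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1, its namespace, and a listener on 10.0.0.2:4420; spdk_nvme_perf then drives 4 KiB random I/O with a 30% read mix (-M is the read percentage) at queue depth 64 for 10 seconds, landing at ~19.2k IOPS in the table above. Against a standalone target the same steps are usually scripted through scripts/rpc.py, which issues the same RPCs rpc_cmd wraps here (a sketch; paths are relative to an SPDK checkout):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512        # returns the bdev name, e.g. Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'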
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:10:42.389 06:53:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:10:42.389 06:53:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3000399' 00:10:42.389 killing process with pid 3000399 00:10:42.389 06:53:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 3000399 00:10:42.389 06:53:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 3000399 00:10:42.389 nvmf threads initialize successfully 00:10:42.389 bdev subsystem init successfully 00:10:42.389 created a nvmf target service 00:10:42.389 create targets's poll groups done 00:10:42.389 all subsystems of target started 00:10:42.389 nvmf target is running 00:10:42.389 all subsystems of target stopped 00:10:42.389 destroy targets's poll groups done 00:10:42.389 destroyed the nvmf target service 00:10:42.389 bdev subsystem finish successfully 00:10:42.389 nvmf threads destroy successfully 00:10:42.389 06:53:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:42.389 06:53:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:42.389 06:53:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:42.389 06:53:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:42.389 06:53:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-save 00:10:42.389 06:53:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:42.389 06:53:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-restore 00:10:42.389 06:53:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:42.389 06:53:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:42.389 06:53:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:42.389 06:53:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:42.389 06:53:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:42.963 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:42.963 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:42.963 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:42.963 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:42.963 00:10:42.963 real 0m21.520s 00:10:42.963 user 0m47.016s 00:10:42.963 sys 0m7.048s 00:10:42.963 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:42.963 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:42.963 ************************************ 00:10:42.963 END TEST nvmf_example 00:10:42.963 ************************************ 00:10:42.963 06:53:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:42.963 06:53:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:42.963 06:53:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:42.963 06:53:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:42.963 ************************************ 00:10:42.963 START TEST nvmf_filesystem 00:10:42.963 ************************************ 00:10:42.963 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:43.227 * Looking for test storage... 00:10:43.227 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:43.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.227 --rc genhtml_branch_coverage=1 00:10:43.227 --rc genhtml_function_coverage=1 00:10:43.227 --rc genhtml_legend=1 00:10:43.227 --rc geninfo_all_blocks=1 00:10:43.227 --rc geninfo_unexecuted_blocks=1 00:10:43.227 00:10:43.227 ' 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:43.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.227 --rc genhtml_branch_coverage=1 00:10:43.227 --rc genhtml_function_coverage=1 00:10:43.227 --rc genhtml_legend=1 00:10:43.227 --rc geninfo_all_blocks=1 00:10:43.227 --rc geninfo_unexecuted_blocks=1 00:10:43.227 00:10:43.227 ' 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:43.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.227 --rc genhtml_branch_coverage=1 00:10:43.227 --rc genhtml_function_coverage=1 00:10:43.227 --rc genhtml_legend=1 00:10:43.227 --rc geninfo_all_blocks=1 00:10:43.227 --rc geninfo_unexecuted_blocks=1 00:10:43.227 00:10:43.227 ' 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:43.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.227 --rc genhtml_branch_coverage=1 00:10:43.227 --rc genhtml_function_coverage=1 00:10:43.227 --rc genhtml_legend=1 00:10:43.227 --rc geninfo_all_blocks=1 00:10:43.227 --rc geninfo_unexecuted_blocks=1 00:10:43.227 00:10:43.227 ' 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:43.227 06:53:42 
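The lt/cmp_versions machinery traced above is what evaluates 'lt 1.15 2' to decide whether the installed lcov predates version 2: version strings are split on '.', ':' and '-' and compared field by field, numerically, with missing fields treated as 0. A standalone sketch of the same comparison (not the scripts/common.sh code itself):

    version_lt() {                        # succeeds when $1 sorts before $2
        local IFS=.:- a b i
        read -ra a <<< "$1"; read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                          # equal is not less-than
    }
    version_lt 1.15 2 && echo "lcov older than 2: keep the branch-coverage flags"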
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:10:43.227 06:53:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:10:43.227 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # 
CONFIG_RDMA=y 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_FC=n 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_TESTS=y 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_URING=n 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:43.228 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:43.228 #define SPDK_CONFIG_H 00:10:43.228 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:43.228 #define SPDK_CONFIG_APPS 1 00:10:43.228 #define SPDK_CONFIG_ARCH native 00:10:43.228 #undef SPDK_CONFIG_ASAN 00:10:43.228 #undef SPDK_CONFIG_AVAHI 00:10:43.228 #undef SPDK_CONFIG_CET 00:10:43.228 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:43.228 #define SPDK_CONFIG_COVERAGE 1 00:10:43.228 #define SPDK_CONFIG_CROSS_PREFIX 00:10:43.228 #undef SPDK_CONFIG_CRYPTO 00:10:43.228 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:43.228 #undef SPDK_CONFIG_CUSTOMOCF 00:10:43.228 #undef SPDK_CONFIG_DAOS 00:10:43.228 #define SPDK_CONFIG_DAOS_DIR 00:10:43.228 #define SPDK_CONFIG_DEBUG 1 00:10:43.228 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:43.228 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:43.228 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:43.228 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:43.228 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:43.228 #undef SPDK_CONFIG_DPDK_UADK 00:10:43.228 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:43.228 #define SPDK_CONFIG_EXAMPLES 1 00:10:43.228 #undef SPDK_CONFIG_FC 00:10:43.228 #define SPDK_CONFIG_FC_PATH 00:10:43.228 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:43.228 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:43.228 #define SPDK_CONFIG_FSDEV 1 00:10:43.228 #undef SPDK_CONFIG_FUSE 00:10:43.228 #undef SPDK_CONFIG_FUZZER 00:10:43.228 #define SPDK_CONFIG_FUZZER_LIB 00:10:43.228 #undef SPDK_CONFIG_GOLANG 00:10:43.228 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:43.228 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:43.228 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:43.228 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:43.228 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:43.228 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:43.228 #undef SPDK_CONFIG_HAVE_LZ4 00:10:43.228 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:43.228 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:43.228 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:43.228 #define SPDK_CONFIG_IDXD 1 00:10:43.228 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:43.228 #undef SPDK_CONFIG_IPSEC_MB 00:10:43.228 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:43.228 #define SPDK_CONFIG_ISAL 1 00:10:43.228 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:43.228 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:43.228 #define SPDK_CONFIG_LIBDIR 00:10:43.228 #undef SPDK_CONFIG_LTO 00:10:43.228 #define SPDK_CONFIG_MAX_LCORES 128 00:10:43.228 #define SPDK_CONFIG_NVME_CUSE 1 00:10:43.228 #undef SPDK_CONFIG_OCF 00:10:43.228 #define SPDK_CONFIG_OCF_PATH 00:10:43.228 #define SPDK_CONFIG_OPENSSL_PATH 00:10:43.228 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:43.228 #define SPDK_CONFIG_PGO_DIR 00:10:43.228 #undef SPDK_CONFIG_PGO_USE 00:10:43.228 #define SPDK_CONFIG_PREFIX /usr/local 00:10:43.228 #undef SPDK_CONFIG_RAID5F 00:10:43.228 #undef SPDK_CONFIG_RBD 00:10:43.228 #define SPDK_CONFIG_RDMA 1 00:10:43.228 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:43.228 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:43.229 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:43.229 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:43.229 #define SPDK_CONFIG_SHARED 1 00:10:43.229 #undef SPDK_CONFIG_SMA 00:10:43.229 #define SPDK_CONFIG_TESTS 1 00:10:43.229 #undef SPDK_CONFIG_TSAN 00:10:43.229 #define SPDK_CONFIG_UBLK 1 00:10:43.229 #define SPDK_CONFIG_UBSAN 1 00:10:43.229 #undef SPDK_CONFIG_UNIT_TESTS 00:10:43.229 #undef SPDK_CONFIG_URING 00:10:43.229 #define 
SPDK_CONFIG_URING_PATH 00:10:43.229 #undef SPDK_CONFIG_URING_ZNS 00:10:43.229 #undef SPDK_CONFIG_USDT 00:10:43.229 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:43.229 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:43.229 #define SPDK_CONFIG_VFIO_USER 1 00:10:43.229 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:43.229 #define SPDK_CONFIG_VHOST 1 00:10:43.229 #define SPDK_CONFIG_VIRTIO 1 00:10:43.229 #undef SPDK_CONFIG_VTUNE 00:10:43.229 #define SPDK_CONFIG_VTUNE_DIR 00:10:43.229 #define SPDK_CONFIG_WERROR 1 00:10:43.229 #define SPDK_CONFIG_WPDK_DIR 00:10:43.229 #undef SPDK_CONFIG_XNVME 00:10:43.229 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.229 06:53:42 
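applications.sh above reads the generated include/spdk/config.h into a glob match; the escaped pattern \#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G is simply asking whether the build has SPDK_CONFIG_DEBUG defined. Two equivalent standalone checks, run from an SPDK checkout:

    # pure-bash form, matching what applications.sh does
    [[ $(< include/spdk/config.h) == *"#define SPDK_CONFIG_DEBUG"* ]] && echo debug build

    # grep form
    grep -q '^#define SPDK_CONFIG_DEBUG' include/spdk/config.h && echo debug build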
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
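The PATH echoed by paths/export.sh above carries repeated copies of the Go, golangci and protoc directories because export.sh prepends them afresh on every source, and it is sourced again for each test script. Where that growth matters, a first-occurrence dedup pass is a common fix (a sketch; SPDK's export.sh does not do this itself):

    PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//')
    export PATH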
pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:43.229 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:43.493 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:43.493 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:43.493 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:43.493 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:43.493 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:43.493 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:43.493 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:43.493 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:43.493 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:43.493 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:43.493 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:43.493 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:43.493 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:43.493 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:43.493 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:43.493 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- 
# export SPDK_TEST_IOAT 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:43.494 
06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:43.494 06:53:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:43.494 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
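
[Editor's annotation] The LD_LIBRARY_PATH and PYTHONPATH values exported above contain the same few directories repeated several times, because paths/export.sh and autotest_common.sh prepend their entries every time they are sourced and nothing deduplicates the result. A minimal sketch of a cleanup helper follows; dedup_path is a hypothetical name, not a function in SPDK's scripts:

    # Hypothetical helper: collapse repeated entries in a colon-separated
    # search path, keeping first-seen order.
    dedup_path() {
        local IFS=':' out='' seen='' entry
        for entry in $1; do
            [[ -z $entry ]] && continue            # skip empty fields from a leading ':'
            case ":$seen:" in
                *":$entry:"*) ;;                   # duplicate, drop it
                *) seen+=":$entry"; out="${out:+$out:}$entry" ;;
            esac
        done
        printf '%s\n' "$out"
    }
    # usage: export LD_LIBRARY_PATH=$(dedup_path "$LD_LIBRARY_PATH")
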
00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
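
[Editor's annotation] Two sanitizer-related settings are traced above: ASAN_OPTIONS/UBSAN_OPTIONS tune runtime behaviour (abort on error, keep core dumps, UBSAN exit code 134), and a LeakSanitizer suppression file is rebuilt at /var/tmp/asan_suppression_file so known leaks in libfuse3.so are not reported as failures. The pattern, condensed into a standalone sketch (option strings copied from the trace; the final comment is illustrative):

    # Rebuild the LSAN suppression file and point the sanitizers at it.
    sup=/var/tmp/asan_suppression_file
    rm -rf "$sup"
    echo 'leak:libfuse3.so' > "$sup"        # ignore known libfuse3 leaks
    export LSAN_OPTIONS=suppressions=$sup
    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
    # any ASan/UBSan-built binary run from this shell now inherits these settings
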
00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j144 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 3003266 ]] 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 3003266 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 
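
[Editor's annotation] set_test_storage 2147483648 asks for 2 GiB of scratch space (plus a margin, per requested_size=2214592512 below). The trace that follows walks `df -T` output, records size/avail for every mount, and settles on the overlay root mounted at / (avail 122621136896 bytes) before printing "Found test storage at .../test/nvmf/target". A condensed sketch of the same probe, assuming GNU df; has_space is a hypothetical name, and the real function additionally falls back through candidate directories under a mktemp path:

    # Does <dir>'s filesystem have at least <bytes> available?
    has_space() {
        local dir=$1 need=$2 avail
        avail=$(df -B1 --output=avail "$dir" | tail -n 1)   # bytes free
        (( avail >= need ))
    }
    # has_space /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target $((2 * 1024**3))
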
00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.tFmGtr 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.tFmGtr/tests/target /tmp/spdk.tFmGtr 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=607141888 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:10:43.495 06:53:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=4677287936 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=122621136896 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=129356541952 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=6735405056 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64668237824 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678268928 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10031104 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:43.495 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=25847947264 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=25871310848 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23363584 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=efivarfs 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=efivarfs 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=216064 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=507904 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=287744 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:43.496 06:53:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64677646336 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678273024 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=626688 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12935639040 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12935651328 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:10:43.496 * Looking for test storage... 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=122621136896 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=8949997568 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:43.496 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:43.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.496 --rc genhtml_branch_coverage=1 00:10:43.496 --rc genhtml_function_coverage=1 00:10:43.496 --rc genhtml_legend=1 00:10:43.496 --rc geninfo_all_blocks=1 00:10:43.496 --rc geninfo_unexecuted_blocks=1 00:10:43.496 00:10:43.496 ' 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:43.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.496 --rc genhtml_branch_coverage=1 00:10:43.496 --rc genhtml_function_coverage=1 00:10:43.496 --rc genhtml_legend=1 00:10:43.496 --rc geninfo_all_blocks=1 00:10:43.496 --rc geninfo_unexecuted_blocks=1 00:10:43.496 00:10:43.496 ' 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:43.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.496 --rc genhtml_branch_coverage=1 00:10:43.496 --rc genhtml_function_coverage=1 00:10:43.496 --rc genhtml_legend=1 00:10:43.496 --rc geninfo_all_blocks=1 00:10:43.496 --rc geninfo_unexecuted_blocks=1 00:10:43.496 00:10:43.496 ' 00:10:43.496 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:43.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.496 --rc genhtml_branch_coverage=1 00:10:43.496 --rc genhtml_function_coverage=1 00:10:43.496 --rc genhtml_legend=1 00:10:43.496 --rc geninfo_all_blocks=1 00:10:43.496 --rc geninfo_unexecuted_blocks=1 00:10:43.496 00:10:43.497 ' 00:10:43.497 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:43.497 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:10:43.497 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:43.497 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:43.497 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:43.497 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:43.497 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:43.497 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:43.497 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:43.497 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:43.497 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:43.497 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:43.497 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:43.497 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:43.497 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:43.497 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:43.497 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:43.497 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:43.497 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:43.497 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:43.497 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:43.497 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:43.497 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:43.497 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.497 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.497 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.497 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:43.497 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.497 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:43.497 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:43.497 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:43.497 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:43.497 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:43.497 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:43.497 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:43.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:43.497 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:43.497 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:43.497 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:43.497 06:53:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:43.497 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:43.497 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:43.497 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:43.497 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:43.497 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:43.497 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:43.497 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:43.497 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.497 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:43.497 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.497 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:43.497 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:43.497 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:43.497 06:53:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:51.647 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:51.647 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:51.647 06:53:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:51.647 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:51.647 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # is_hw=yes 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:51.647 06:53:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:51.647 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:51.648 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:51.648 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:51.648 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:51.648 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:51.648 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:51.648 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.666 ms 00:10:51.648 00:10:51.648 --- 10.0.0.2 ping statistics --- 00:10:51.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.648 rtt min/avg/max/mdev = 0.666/0.666/0.666/0.000 ms 00:10:51.648 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:51.648 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:51.648 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:10:51.648 00:10:51.648 --- 10.0.0.1 ping statistics --- 00:10:51.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.648 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:10:51.648 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:51.648 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # return 0 00:10:51.648 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:51.648 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:51.648 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:51.648 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:51.648 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:51.648 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:51.648 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:51.648 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:51.648 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:51.648 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:51.648 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:51.648 ************************************ 00:10:51.648 START TEST nvmf_filesystem_no_in_capsule 00:10:51.648 ************************************ 00:10:51.648 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:10:51.648 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:51.648 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:51.648 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:51.648 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:51.648 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:51.648 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=3006897 00:10:51.648 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 3006897 00:10:51.648 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:51.648 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 3006897 ']' 00:10:51.648 
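#
# Condensed sketch of the split-port topology common.sh configured above: one E810
# port (cvl_0_0, 10.0.0.2) is moved into a private namespace to act as the target,
# the other (cvl_0_1, 10.0.0.1) stays in the root namespace as the initiator. All
# commands reconstructed verbatim from the trace:
#
#   ip netns add cvl_0_0_ns_spdk
#   ip link set cvl_0_0 netns cvl_0_0_ns_spdk
#   ip addr add 10.0.0.1/24 dev cvl_0_1
#   ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
#   ip link set cvl_0_1 up
#   ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
#   iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
#   ping -c 1 10.0.0.2    # initiator -> target sanity check (0% loss above)
#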
06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:51.648 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:51.648 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:51.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:51.648 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:51.648 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:51.648 [2024-10-16 06:53:50.559726] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:10:51.648 [2024-10-16 06:53:50.559785] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:51.648 [2024-10-16 06:53:50.627379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:51.648 [2024-10-16 06:53:50.677410] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:51.648 [2024-10-16 06:53:50.677462] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:51.648 [2024-10-16 06:53:50.677469] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:51.648 [2024-10-16 06:53:50.677475] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:51.648 [2024-10-16 06:53:50.677479] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
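#
# The target application is the process waitforlisten polls for; it was launched
# inside the namespace as (path shortened here, flags verbatim from the trace):
#
#   ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
#   # -m 0xF   : core mask -> the four reactor threads reported just below
#   # -e 0xFFFF: tracepoint group mask (the spdk_trace notices above)
#   # -i 0     : shared-memory id, matching 'spdk_trace -s nvmf -i 0'
#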
00:10:51.648 [2024-10-16 06:53:50.681872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:51.648 [2024-10-16 06:53:50.682215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:51.648 [2024-10-16 06:53:50.682378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:51.648 [2024-10-16 06:53:50.682379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.648 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:51.648 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:10:51.648 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:51.648 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:51.648 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:51.648 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:51.648 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:51.648 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:51.648 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.648 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:51.648 [2024-10-16 06:53:50.841546] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:51.648 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.648 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:51.648 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.648 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:51.648 Malloc1 00:10:51.648 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.648 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:51.648 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.648 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:51.648 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.648 06:53:50 
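#
# With the target up, the storage stack is provisioned over JSON-RPC; rpc_cmd is
# the harness wrapper that forwards these arguments to scripts/rpc.py against
# /var/tmp/spdk.sock. The full sequence, continued in the trace just below:
#
#   scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0    # -c 0: in-capsule data off
#   scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1           # 512 MiB ramdisk, 512 B blocks
#   scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
#   scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
#   scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
#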
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:51.648 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.648 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:51.648 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.648 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:51.648 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.648 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:51.648 [2024-10-16 06:53:50.997281] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:51.648 06:53:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.648 06:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:51.649 06:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:10:51.649 06:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:10:51.649 06:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:10:51.649 06:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:10:51.649 06:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:51.649 06:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.649 06:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:51.649 06:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.649 06:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:10:51.649 { 00:10:51.649 "name": "Malloc1", 00:10:51.649 "aliases": [ 00:10:51.649 "5ab9f490-d28e-4843-be83-06e18c925e8a" 00:10:51.649 ], 00:10:51.649 "product_name": "Malloc disk", 00:10:51.649 "block_size": 512, 00:10:51.649 "num_blocks": 1048576, 00:10:51.649 "uuid": "5ab9f490-d28e-4843-be83-06e18c925e8a", 00:10:51.649 "assigned_rate_limits": { 00:10:51.649 "rw_ios_per_sec": 0, 00:10:51.649 "rw_mbytes_per_sec": 0, 00:10:51.649 "r_mbytes_per_sec": 0, 00:10:51.649 "w_mbytes_per_sec": 0 00:10:51.649 }, 00:10:51.649 "claimed": true, 00:10:51.649 "claim_type": "exclusive_write", 00:10:51.649 "zoned": false, 00:10:51.649 "supported_io_types": { 00:10:51.649 "read": 
true, 00:10:51.649 "write": true, 00:10:51.649 "unmap": true, 00:10:51.649 "flush": true, 00:10:51.649 "reset": true, 00:10:51.649 "nvme_admin": false, 00:10:51.649 "nvme_io": false, 00:10:51.649 "nvme_io_md": false, 00:10:51.649 "write_zeroes": true, 00:10:51.649 "zcopy": true, 00:10:51.649 "get_zone_info": false, 00:10:51.649 "zone_management": false, 00:10:51.649 "zone_append": false, 00:10:51.649 "compare": false, 00:10:51.649 "compare_and_write": false, 00:10:51.649 "abort": true, 00:10:51.649 "seek_hole": false, 00:10:51.649 "seek_data": false, 00:10:51.649 "copy": true, 00:10:51.649 "nvme_iov_md": false 00:10:51.649 }, 00:10:51.649 "memory_domains": [ 00:10:51.649 { 00:10:51.649 "dma_device_id": "system", 00:10:51.649 "dma_device_type": 1 00:10:51.649 }, 00:10:51.649 { 00:10:51.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.649 "dma_device_type": 2 00:10:51.649 } 00:10:51.649 ], 00:10:51.649 "driver_specific": {} 00:10:51.649 } 00:10:51.649 ]' 00:10:51.649 06:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:10:51.649 06:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:10:51.649 06:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:10:51.649 06:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:10:51.649 06:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:10:51.649 06:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:10:51.649 06:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:51.649 06:53:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:53.566 06:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:53.566 06:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:10:53.566 06:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:53.566 06:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:53.566 06:53:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:10:55.482 06:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:55.482 06:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:55.482 06:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:10:55.482 06:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:55.482 06:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:55.482 06:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:10:55.482 06:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:55.482 06:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:55.482 06:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:55.482 06:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:55.482 06:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:55.482 06:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:55.482 06:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:55.482 06:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:55.482 06:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:55.482 06:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:55.482 06:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:55.482 06:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:55.742 06:53:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:56.684 06:53:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:56.684 06:53:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:56.684 06:53:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:56.684 06:53:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:56.684 06:53:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.684 ************************************ 00:10:56.684 START TEST filesystem_ext4 00:10:56.684 ************************************ 00:10:56.684 06:53:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
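#
# Host side of the handshake, condensed from the trace above: attach the subsystem
# with nvme-cli, then resolve the new controller back to a block device through the
# serial number set on the subsystem:
#
#   nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
#       --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
#       --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be
#   lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # poll until this hits 1
#   # -> nvme0n1, whose 536870912-byte size must equal the malloc bdev, after which
#   #    it is partitioned: parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
#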
00:10:56.684 06:53:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:56.684 06:53:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:56.684 06:53:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:56.684 06:53:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:10:56.684 06:53:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:56.684 06:53:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:10:56.684 06:53:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:10:56.684 06:53:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:10:56.684 06:53:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:10:56.684 06:53:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:56.684 mke2fs 1.47.0 (5-Feb-2023) 00:10:56.946 Discarding device blocks: 0/522240 done 00:10:56.946 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:56.946 Filesystem UUID: 48afe804-39ed-4644-a55c-616d3acb0c9d 00:10:56.946 Superblock backups stored on blocks: 00:10:56.946 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:56.946 00:10:56.946 Allocating group tables: 0/64 done 00:10:56.946 Writing inode tables: 0/64 done 00:10:59.487 Creating journal (8192 blocks): done 00:11:01.879 Writing superblocks and filesystem accounting information: 0/64 done 00:11:01.879 00:11:01.879 06:54:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:01.879 06:54:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:08.456 06:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:08.456 06:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:08.456 06:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:08.456 06:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:08.456 06:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:08.456 06:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:08.456 
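#
# Every filesystem leg of the test repeats the same create/verify cycle on the
# fabric-attached partition (condensed from the ext4 trace around this point):
#
#   mkfs.ext4 -F /dev/nvme0n1p1
#   mount /dev/nvme0n1p1 /mnt/device
#   touch /mnt/device/aaa && sync      # force a write through NVMe/TCP
#   rm /mnt/device/aaa && sync
#   umount /mnt/device
#   kill -0 "$nvmfpid"                 # target process must still be alive
#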
06:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3006897 00:11:08.456 06:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:08.456 06:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:08.456 06:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:08.456 06:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:08.456 00:11:08.456 real 0m10.635s 00:11:08.456 user 0m0.032s 00:11:08.456 sys 0m0.080s 00:11:08.456 06:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:08.456 06:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:08.456 ************************************ 00:11:08.456 END TEST filesystem_ext4 00:11:08.456 ************************************ 00:11:08.456 06:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:08.456 06:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:08.456 06:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:08.456 06:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:08.456 ************************************ 00:11:08.456 START TEST filesystem_btrfs 00:11:08.456 ************************************ 00:11:08.456 06:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:08.456 06:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:08.456 06:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:08.456 06:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:08.456 06:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:08.456 06:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:08.456 06:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:08.456 06:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:08.456 06:54:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:08.456 06:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:08.456 06:54:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:08.456 btrfs-progs v6.8.1 00:11:08.456 See https://btrfs.readthedocs.io for more information. 00:11:08.456 00:11:08.456 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:08.456 NOTE: several default settings have changed in version 5.15, please make sure 00:11:08.456 this does not affect your deployments: 00:11:08.456 - DUP for metadata (-m dup) 00:11:08.456 - enabled no-holes (-O no-holes) 00:11:08.456 - enabled free-space-tree (-R free-space-tree) 00:11:08.456 00:11:08.456 Label: (null) 00:11:08.456 UUID: e13f9e56-1026-4279-8447-d9a63566ea91 00:11:08.456 Node size: 16384 00:11:08.456 Sector size: 4096 (CPU page size: 4096) 00:11:08.456 Filesystem size: 510.00MiB 00:11:08.456 Block group profiles: 00:11:08.456 Data: single 8.00MiB 00:11:08.456 Metadata: DUP 32.00MiB 00:11:08.456 System: DUP 8.00MiB 00:11:08.456 SSD detected: yes 00:11:08.456 Zoned device: no 00:11:08.456 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:08.456 Checksum: crc32c 00:11:08.456 Number of devices: 1 00:11:08.456 Devices: 00:11:08.456 ID SIZE PATH 00:11:08.456 1 510.00MiB /dev/nvme0n1p1 00:11:08.456 00:11:08.456 06:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:08.456 06:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:08.456 06:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:08.456 06:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:08.456 06:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:08.456 06:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:08.456 06:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:08.456 06:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:08.456 06:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3006897 00:11:08.456 06:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:08.456 06:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:08.456 06:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:08.456 
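#
# The btrfs leg differs only in the mkfs call; make_filesystem branches on fstype
# because mkfs.btrfs spells force as -f where mkfs.ext4 uses -F (both visible in
# the trace above):
#
#   mkfs.btrfs -f /dev/nvme0n1p1
#   mount /dev/nvme0n1p1 /mnt/device   # then the same touch/sync/rm/umount cycle
#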
06:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:08.456 00:11:08.456 real 0m0.908s 00:11:08.456 user 0m0.039s 00:11:08.456 sys 0m0.109s 00:11:08.456 06:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:08.456 06:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:08.456 ************************************ 00:11:08.456 END TEST filesystem_btrfs 00:11:08.456 ************************************ 00:11:08.456 06:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:08.456 06:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:08.456 06:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:08.456 06:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:08.456 ************************************ 00:11:08.456 START TEST filesystem_xfs 00:11:08.456 ************************************ 00:11:08.456 06:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:08.456 06:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:08.456 06:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:08.456 06:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:08.456 06:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:08.456 06:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:08.456 06:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:08.456 06:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:11:08.456 06:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:08.456 06:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:08.456 06:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:08.456 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:08.456 = sectsz=512 attr=2, projid32bit=1 00:11:08.456 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:08.456 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:08.456 data 
= bsize=4096 blocks=130560, imaxpct=25 00:11:08.456 = sunit=0 swidth=0 blks 00:11:08.456 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:08.456 log =internal log bsize=4096 blocks=16384, version=2 00:11:08.456 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:08.456 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:09.397 Discarding blocks...Done. 00:11:09.397 06:54:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:09.397 06:54:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:11.943 06:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:11.943 06:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:11.943 06:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:11.943 06:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:11.943 06:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:11.943 06:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:11.943 06:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3006897 00:11:11.943 06:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:11.943 06:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:11.944 06:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:11.944 06:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:11.944 00:11:11.944 real 0m3.459s 00:11:11.944 user 0m0.029s 00:11:11.944 sys 0m0.078s 00:11:11.944 06:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:11.944 06:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:11.944 ************************************ 00:11:11.944 END TEST filesystem_xfs 00:11:11.944 ************************************ 00:11:11.944 06:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:12.204 06:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:12.204 06:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:12.204 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.204 06:54:11 
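#
# Teardown mirrors setup: detach the host, remove the subsystem over RPC, then stop
# the target (NQN and pid are the ones from this run, as traced around this point):
#
#   nvme disconnect -n nqn.2016-06.io.spdk:cnode1
#   scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
#   kill 3006897    # killprocess(); the harness then waits for the pid to exit
#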
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:12.204 06:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:11:12.204 06:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:12.204 06:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:12.205 06:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:12.205 06:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:12.205 06:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:12.205 06:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:12.205 06:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.205 06:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.205 06:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.205 06:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:12.205 06:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3006897 00:11:12.205 06:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 3006897 ']' 00:11:12.205 06:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 3006897 00:11:12.205 06:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:12.205 06:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:12.205 06:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3006897 00:11:12.205 06:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:12.205 06:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:12.205 06:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3006897' 00:11:12.205 killing process with pid 3006897 00:11:12.205 06:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 3006897 00:11:12.205 06:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@974 -- # wait 3006897 00:11:12.466 06:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:12.466 00:11:12.466 real 0m21.402s 00:11:12.466 user 1m24.608s 00:11:12.466 sys 0m1.427s 00:11:12.466 06:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:12.466 06:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.466 ************************************ 00:11:12.466 END TEST nvmf_filesystem_no_in_capsule 00:11:12.466 ************************************ 00:11:12.466 06:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:12.466 06:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:12.466 06:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:12.466 06:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:12.729 ************************************ 00:11:12.729 START TEST nvmf_filesystem_in_capsule 00:11:12.729 ************************************ 00:11:12.729 06:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:11:12.729 06:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:12.729 06:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:12.729 06:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:12.729 06:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:12.729 06:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.729 06:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=3012013 00:11:12.730 06:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 3012013 00:11:12.730 06:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:12.730 06:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 3012013 ']' 00:11:12.730 06:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.730 06:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:12.730 06:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:12.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
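#
# Second pass: nvmf_filesystem_part reruns the whole suite with in-capsule data
# enabled. The only RPC that changes is the transport creation (visible just
# below), which lets writes of up to 4096 bytes ride inside the NVMe/TCP command
# capsule instead of being fetched in a separate data transfer:
#
#   scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096
#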
00:11:12.730 06:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:12.730 06:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.730 [2024-10-16 06:54:12.046216] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:11:12.730 [2024-10-16 06:54:12.046265] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:12.730 [2024-10-16 06:54:12.133532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:12.730 [2024-10-16 06:54:12.163568] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:12.730 [2024-10-16 06:54:12.163595] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:12.730 [2024-10-16 06:54:12.163601] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:12.730 [2024-10-16 06:54:12.163605] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:12.730 [2024-10-16 06:54:12.163609] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:12.730 [2024-10-16 06:54:12.164870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:12.730 [2024-10-16 06:54:12.164962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:12.730 [2024-10-16 06:54:12.165206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:12.730 [2024-10-16 06:54:12.165209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.674 06:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:13.674 06:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:13.674 06:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:13.674 06:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:13.674 06:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:13.674 06:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:13.674 06:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:13.674 06:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:13.674 06:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.674 06:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:13.674 [2024-10-16 06:54:12.877936] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:13.674 06:54:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.674 06:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:13.674 06:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.674 06:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:13.674 Malloc1 00:11:13.674 06:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.674 06:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:13.674 06:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.674 06:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:13.674 06:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.674 06:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:13.674 06:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.674 06:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:13.674 06:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.674 06:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:13.674 06:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.674 06:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:13.674 [2024-10-16 06:54:13.000957] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:13.674 06:54:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.674 06:54:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:13.674 06:54:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:13.674 06:54:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:13.674 06:54:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:13.674 06:54:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:13.674 06:54:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:13.674 06:54:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.674 06:54:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:13.674 06:54:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.674 06:54:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:13.674 { 00:11:13.674 "name": "Malloc1", 00:11:13.674 "aliases": [ 00:11:13.674 "98d7fd5e-43e5-4321-b754-c73eb8990a43" 00:11:13.674 ], 00:11:13.674 "product_name": "Malloc disk", 00:11:13.674 "block_size": 512, 00:11:13.674 "num_blocks": 1048576, 00:11:13.674 "uuid": "98d7fd5e-43e5-4321-b754-c73eb8990a43", 00:11:13.674 "assigned_rate_limits": { 00:11:13.674 "rw_ios_per_sec": 0, 00:11:13.674 "rw_mbytes_per_sec": 0, 00:11:13.674 "r_mbytes_per_sec": 0, 00:11:13.674 "w_mbytes_per_sec": 0 00:11:13.674 }, 00:11:13.674 "claimed": true, 00:11:13.674 "claim_type": "exclusive_write", 00:11:13.674 "zoned": false, 00:11:13.674 "supported_io_types": { 00:11:13.674 "read": true, 00:11:13.674 "write": true, 00:11:13.674 "unmap": true, 00:11:13.674 "flush": true, 00:11:13.674 "reset": true, 00:11:13.674 "nvme_admin": false, 00:11:13.674 "nvme_io": false, 00:11:13.674 "nvme_io_md": false, 00:11:13.674 "write_zeroes": true, 00:11:13.674 "zcopy": true, 00:11:13.674 "get_zone_info": false, 00:11:13.674 "zone_management": false, 00:11:13.674 "zone_append": false, 00:11:13.674 "compare": false, 00:11:13.674 "compare_and_write": false, 00:11:13.674 "abort": true, 00:11:13.674 "seek_hole": false, 00:11:13.674 "seek_data": false, 00:11:13.674 "copy": true, 00:11:13.674 "nvme_iov_md": false 00:11:13.674 }, 00:11:13.674 "memory_domains": [ 00:11:13.674 { 00:11:13.674 "dma_device_id": "system", 00:11:13.674 "dma_device_type": 1 00:11:13.674 }, 00:11:13.674 { 00:11:13.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.674 "dma_device_type": 2 00:11:13.674 } 00:11:13.674 ], 00:11:13.674 "driver_specific": {} 00:11:13.674 } 00:11:13.674 ]' 00:11:13.674 06:54:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:13.674 06:54:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:13.674 06:54:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:13.674 06:54:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:13.674 06:54:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:13.674 06:54:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:13.674 06:54:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:13.674 06:54:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:15.586 06:54:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:15.586 06:54:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:15.586 06:54:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:15.586 06:54:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:15.586 06:54:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:17.498 06:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:17.498 06:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:17.498 06:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:17.498 06:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:17.498 06:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:17.498 06:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:17.498 06:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:17.498 06:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:17.498 06:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:17.498 06:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:17.498 06:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:17.498 06:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:17.498 06:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:17.499 06:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:17.499 06:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:17.499 06:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:17.499 06:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:17.499 06:54:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:17.759 06:54:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:19.146 06:54:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:19.146 06:54:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:19.146 06:54:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:19.146 06:54:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:19.146 06:54:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:19.146 ************************************ 00:11:19.146 START TEST filesystem_in_capsule_ext4 00:11:19.146 ************************************ 00:11:19.146 06:54:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:19.146 06:54:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:19.146 06:54:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:19.146 06:54:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:19.146 06:54:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:19.146 06:54:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:19.146 06:54:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:19.146 06:54:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:19.146 06:54:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:19.146 06:54:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:19.146 06:54:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:19.146 mke2fs 1.47.0 (5-Feb-2023) 00:11:19.146 Discarding device blocks: 0/522240 done 00:11:19.146 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:19.146 Filesystem UUID: 4656ed07-ddb2-4e4e-977a-1c0280401890 00:11:19.146 Superblock backups stored on blocks: 00:11:19.146 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:19.146 00:11:19.146 Allocating group tables: 0/64 done 00:11:19.146 Writing inode tables: 
0/64 done 00:11:21.064 Creating journal (8192 blocks): done 00:11:21.064 Writing superblocks and filesystem accounting information: 0/64 done 00:11:21.064 00:11:21.064 06:54:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:21.064 06:54:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:27.646 06:54:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:27.646 06:54:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:27.646 06:54:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:27.646 06:54:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:27.646 06:54:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:27.646 06:54:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:27.646 06:54:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3012013 00:11:27.646 06:54:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:27.646 06:54:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:27.646 06:54:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:27.646 06:54:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:27.646 00:11:27.646 real 0m8.298s 00:11:27.646 user 0m0.037s 00:11:27.646 sys 0m0.070s 00:11:27.646 06:54:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:27.646 06:54:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:27.646 ************************************ 00:11:27.646 END TEST filesystem_in_capsule_ext4 00:11:27.646 ************************************ 00:11:27.646 06:54:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:27.646 06:54:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:27.646 06:54:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:27.646 06:54:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:27.646 
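The ext4 pass above condenses to a short, repeatable sequence: create a 512 MiB malloc bdev over RPC, export it through an NVMe-oF TCP subsystem, connect the local initiator with nvme-cli, then partition and smoke-test a filesystem on the resulting block device. A minimal sketch of that sequence, assuming the SPDK target is already running, that rpc.py is the entry point behind the harness's rpc_cmd wrapper, and that the namespace enumerates as /dev/nvme0n1 as it does in this log:

# Target-side setup, condensed from the logged rpc_cmd calls.
scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: connect, then poll by serial the way waitforserial does.
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
  --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
  --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be
until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do sleep 2; done

# Partition, make a filesystem, and run the touch/sync/rm write check.
parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe && sleep 1
mkfs.ext4 -F /dev/nvme0n1p1
mkdir -p /mnt/device && mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa && sync
rm /mnt/device/aaa && sync
umount /mnt/device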
************************************ 00:11:27.646 START TEST filesystem_in_capsule_btrfs 00:11:27.646 ************************************ 00:11:27.646 06:54:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:27.646 06:54:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:27.646 06:54:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:27.646 06:54:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:27.646 06:54:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:27.646 06:54:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:27.646 06:54:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:27.646 06:54:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:27.646 06:54:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:27.646 06:54:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:27.646 06:54:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:27.646 btrfs-progs v6.8.1 00:11:27.646 See https://btrfs.readthedocs.io for more information. 00:11:27.646 00:11:27.646 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:27.646 NOTE: several default settings have changed in version 5.15, please make sure 00:11:27.646 this does not affect your deployments: 00:11:27.646 - DUP for metadata (-m dup) 00:11:27.646 - enabled no-holes (-O no-holes) 00:11:27.646 - enabled free-space-tree (-R free-space-tree) 00:11:27.646 00:11:27.646 Label: (null) 00:11:27.646 UUID: 0681a5ba-a852-4b8f-86cb-8c5f9e73d624 00:11:27.646 Node size: 16384 00:11:27.646 Sector size: 4096 (CPU page size: 4096) 00:11:27.646 Filesystem size: 510.00MiB 00:11:27.646 Block group profiles: 00:11:27.646 Data: single 8.00MiB 00:11:27.647 Metadata: DUP 32.00MiB 00:11:27.647 System: DUP 8.00MiB 00:11:27.647 SSD detected: yes 00:11:27.647 Zoned device: no 00:11:27.647 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:27.647 Checksum: crc32c 00:11:27.647 Number of devices: 1 00:11:27.647 Devices: 00:11:27.647 ID SIZE PATH 00:11:27.647 1 510.00MiB /dev/nvme0n1p1 00:11:27.647 00:11:27.647 06:54:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:27.647 06:54:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:27.907 06:54:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:27.907 06:54:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:27.907 06:54:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:27.908 06:54:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:27.908 06:54:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:27.908 06:54:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:27.908 06:54:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3012013 00:11:27.908 06:54:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:27.908 06:54:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:27.908 06:54:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:27.908 06:54:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:27.908 00:11:27.908 real 0m0.728s 00:11:27.908 user 0m0.033s 00:11:27.908 sys 0m0.116s 00:11:27.908 06:54:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:27.908 06:54:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:11:27.908 ************************************ 00:11:27.908 END TEST filesystem_in_capsule_btrfs 00:11:27.908 ************************************ 00:11:28.168 06:54:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:28.168 06:54:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:28.168 06:54:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:28.168 06:54:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:28.168 ************************************ 00:11:28.168 START TEST filesystem_in_capsule_xfs 00:11:28.168 ************************************ 00:11:28.168 06:54:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:28.168 06:54:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:28.168 06:54:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:28.168 06:54:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:28.168 06:54:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:28.169 06:54:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:28.169 06:54:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:28.169 06:54:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:11:28.169 06:54:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:28.169 06:54:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:28.169 06:54:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:28.169 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:28.169 = sectsz=512 attr=2, projid32bit=1 00:11:28.169 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:28.169 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:28.169 data = bsize=4096 blocks=130560, imaxpct=25 00:11:28.169 = sunit=0 swidth=0 blks 00:11:28.169 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:28.169 log =internal log bsize=4096 blocks=16384, version=2 00:11:28.169 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:28.169 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:29.111 Discarding blocks...Done. 
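With all three mkfs runs now in the log, the only filesystem-specific wrinkle in make_filesystem is the force flag: mke2fs wants -F, while mkfs.btrfs and mkfs.xfs take -f. A reconstruction of that dispatch from the logged '[' fstype = ext4 ']' branches; the retry counter the real helper keeps (local i=0) is omitted here:

# Sketch of common/autotest_common.sh's make_filesystem force-flag choice.
make_filesystem() {
    local fstype=$1 dev_name=$2 force
    if [ "$fstype" = ext4 ]; then
        force=-F    # mke2fs refuses to format a partition without -F
    else
        force=-f    # btrfs and xfs spell their force flag lowercase
    fi
    mkfs."$fstype" "$force" "$dev_name"
}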
00:11:29.111 06:54:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:29.111 06:54:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:31.656 06:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:31.656 06:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:31.656 06:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:31.656 06:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:31.656 06:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:31.656 06:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:31.656 06:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3012013 00:11:31.656 06:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:31.656 06:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:31.656 06:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:31.656 06:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:31.656 00:11:31.656 real 0m3.421s 00:11:31.656 user 0m0.029s 00:11:31.656 sys 0m0.078s 00:11:31.656 06:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:31.656 06:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:31.656 ************************************ 00:11:31.656 END TEST filesystem_in_capsule_xfs 00:11:31.656 ************************************ 00:11:31.656 06:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:31.656 06:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:31.656 06:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:31.656 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.656 06:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:31.656 06:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1219 -- # local i=0 00:11:31.656 06:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:31.656 06:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:31.656 06:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:31.656 06:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:31.656 06:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:31.656 06:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:31.656 06:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.656 06:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.656 06:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.656 06:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:31.656 06:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3012013 00:11:31.656 06:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 3012013 ']' 00:11:31.656 06:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 3012013 00:11:31.656 06:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:31.656 06:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:31.916 06:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3012013 00:11:31.917 06:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:31.917 06:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:31.917 06:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3012013' 00:11:31.917 killing process with pid 3012013 00:11:31.917 06:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 3012013 00:11:31.917 06:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 3012013 00:11:31.917 06:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:31.917 00:11:31.917 real 0m19.439s 00:11:31.917 user 1m16.865s 00:11:31.917 sys 0m1.456s 00:11:31.917 06:54:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:31.917 06:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.917 ************************************ 00:11:31.917 END TEST nvmf_filesystem_in_capsule 00:11:31.917 ************************************ 00:11:32.178 06:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:32.178 06:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:32.178 06:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:32.178 06:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:32.178 06:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:32.178 06:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:32.178 06:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:32.178 rmmod nvme_tcp 00:11:32.178 rmmod nvme_fabrics 00:11:32.178 rmmod nvme_keyring 00:11:32.178 06:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:32.178 06:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:32.178 06:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:32.178 06:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:11:32.178 06:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:32.178 06:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:32.178 06:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:32.178 06:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:32.178 06:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-save 00:11:32.178 06:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:32.178 06:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-restore 00:11:32.178 06:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:32.178 06:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:32.178 06:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:32.178 06:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:32.178 06:54:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:34.721 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:34.721 00:11:34.721 real 0m51.153s 00:11:34.721 user 2m43.889s 00:11:34.721 sys 0m8.754s 00:11:34.721 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:34.721 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:34.721 
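Teardown, spread across the lines above, mirrors setup in reverse: release the partition, disconnect the initiator, delete the subsystem over RPC, stop the target, and unload the NVMe/TCP modules. Collected into one sketch, with rpc.py and the literal pid standing in for the harness's rpc_cmd and killprocess wrappers:

flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
sync
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill 3012013 && wait 3012013    # works because the target is a child of the test shell
modprobe -v -r nvme-tcp         # the log shows nvme_fabrics and nvme_keyring go with it
# Strip only the SPDK-tagged firewall rules, leaving the rest of the ruleset alone:
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip -4 addr flush cvl_0_1
# _remove_spdk_ns runs with tracing suppressed; presumably it deletes the
# namespace, e.g. via: ip netns del cvl_0_0_ns_spdk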
************************************ 00:11:34.721 END TEST nvmf_filesystem 00:11:34.721 ************************************ 00:11:34.721 06:54:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:34.721 06:54:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:34.721 06:54:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:34.721 06:54:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:34.721 ************************************ 00:11:34.721 START TEST nvmf_target_discovery 00:11:34.721 ************************************ 00:11:34.721 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:34.721 * Looking for test storage... 00:11:34.721 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:34.721 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:34.721 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:11:34.721 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:34.721 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:34.721 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:34.721 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:34.721 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:34.721 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:34.721 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:34.721 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:34.721 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:34.721 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:34.721 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:34.721 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:34.721 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:34.721 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:34.721 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:34.721 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:34.721 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:34.721 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:34.721 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:34.721 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:34.721 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:34.721 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:34.721 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:34.721 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:34.721 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:34.721 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:34.721 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:34.721 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:34.721 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:34.721 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:34.721 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:34.721 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:34.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.721 --rc genhtml_branch_coverage=1 00:11:34.721 --rc genhtml_function_coverage=1 00:11:34.721 --rc genhtml_legend=1 00:11:34.721 --rc geninfo_all_blocks=1 00:11:34.721 --rc geninfo_unexecuted_blocks=1 00:11:34.721 00:11:34.721 ' 00:11:34.721 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:34.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.721 --rc genhtml_branch_coverage=1 00:11:34.721 --rc genhtml_function_coverage=1 00:11:34.721 --rc genhtml_legend=1 00:11:34.721 --rc geninfo_all_blocks=1 00:11:34.721 --rc geninfo_unexecuted_blocks=1 00:11:34.722 00:11:34.722 ' 00:11:34.722 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:34.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.722 --rc genhtml_branch_coverage=1 00:11:34.722 --rc genhtml_function_coverage=1 00:11:34.722 --rc genhtml_legend=1 00:11:34.722 --rc geninfo_all_blocks=1 00:11:34.722 --rc geninfo_unexecuted_blocks=1 00:11:34.722 00:11:34.722 ' 00:11:34.722 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:34.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.722 --rc genhtml_branch_coverage=1 00:11:34.722 --rc genhtml_function_coverage=1 00:11:34.722 --rc genhtml_legend=1 00:11:34.722 --rc geninfo_all_blocks=1 00:11:34.722 --rc geninfo_unexecuted_blocks=1 00:11:34.722 00:11:34.722 ' 00:11:34.722 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:34.722 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:34.722 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:34.722 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:34.722 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:34.722 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:34.722 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:34.722 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:34.722 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:34.722 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:34.722 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:34.722 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:34.722 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:34.722 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:34.722 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:34.722 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:34.722 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:34.722 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:34.722 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:34.722 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:34.722 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:34.722 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:34.722 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:34.722 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.722 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.722 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.722 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:34.722 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.722 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:34.722 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:34.722 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:34.722 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:34.722 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:34.722 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:34.722 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:34.722 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:34.722 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:34.722 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:34.722 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:34.722 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:34.722 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:34.722 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:34.722 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:34.722 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:34.722 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:34.722 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:34.722 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:34.722 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:34.722 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:34.722 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:34.722 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:34.722 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:34.722 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:34.722 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:34.722 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:34.722 06:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:42.861 06:54:41 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:42.861 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:42.861 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:42.861 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 
00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:42.861 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:42.861 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:42.862 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:42.862 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:42.862 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:42.862 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:42.862 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:42.862 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:42.862 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:42.862 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:42.862 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:42.862 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:42.862 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:42.862 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:42.862 06:54:41 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:42.862 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:42.862 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:42.862 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:42.862 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:42.862 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:42.862 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:42.862 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.687 ms 00:11:42.862 00:11:42.862 --- 10.0.0.2 ping statistics --- 00:11:42.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.862 rtt min/avg/max/mdev = 0.687/0.687/0.687/0.000 ms 00:11:42.862 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:42.862 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:42.862 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:11:42.862 00:11:42.862 --- 10.0.0.1 ping statistics --- 00:11:42.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.862 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:11:42.862 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:42.862 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # return 0 00:11:42.862 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:42.862 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:42.862 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:42.862 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:42.862 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:42.862 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:42.862 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:42.862 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:42.862 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:42.862 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:42.862 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.862 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # nvmfpid=3020155 00:11:42.862 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # waitforlisten 3020155 00:11:42.862 06:54:41 
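Both directions ping cleanly across the namespace boundary, so the data path is verified before the target ever starts. Note how the ipts wrapper expands: the ACCEPT rule for port 4420 is inserted with an -m comment tag of the form 'SPDK_NVMF:<original arguments>'. That tag is what teardown later uses to strip exactly the rules the harness added (see the iptables-save | grep -v SPDK_NVMF | iptables-restore sequence near the end of this test). A sketch of the pattern, reconstructed from the expansion in the trace rather than copied from SPDK's helper:

# Tag every rule we insert so cleanup can remove them wholesale.
ipts() {
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # as traced above

# Teardown: rewrite the ruleset without any SPDK_NVMF-tagged rules.
iptables-save | grep -v SPDK_NVMF | iptables-restore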
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:42.862 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 3020155 ']' 00:11:42.862 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.862 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:42.862 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:42.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:42.862 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:42.862 06:54:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.862 [2024-10-16 06:54:41.465728] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:11:42.862 [2024-10-16 06:54:41.465798] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:42.862 [2024-10-16 06:54:41.553688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:42.862 [2024-10-16 06:54:41.606885] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:42.862 [2024-10-16 06:54:41.606960] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:42.862 [2024-10-16 06:54:41.606970] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:42.862 [2024-10-16 06:54:41.606977] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:42.862 [2024-10-16 06:54:41.606983] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
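nvmf_tgt is launched inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF), and waitforlisten — traced above with rpc_addr=/var/tmp/spdk.sock and max_retries=100 — blocks until the process is alive and its RPC socket has appeared. A minimal illustration of that guard (SPDK's real implementation lives in test/common/autotest_common.sh and does more):

# Illustrative sketch only, not SPDK's waitforlisten.
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
        [[ -S $rpc_addr ]] && return 0           # RPC unix socket has appeared
        sleep 0.5
    done
    return 1   # timed out waiting for the socket
}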
00:11:42.862 [2024-10-16 06:54:41.609039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:42.862 [2024-10-16 06:54:41.609199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:42.862 [2024-10-16 06:54:41.609359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.862 [2024-10-16 06:54:41.609359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:42.862 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:42.862 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:11:42.862 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:42.862 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:42.862 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.862 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:42.862 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:42.862 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.862 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.862 [2024-10-16 06:54:42.341021] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:42.862 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.862 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:42.862 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:42.862 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:42.862 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.862 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:43.123 Null1 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:43.124 06:54:42 
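With all four reactors up, the transport is created once and then discovery.sh repeats the same four-step provisioning — null bdev, subsystem, namespace, listener — for cnode1 through cnode4, as the records that follow show. Collapsed back into the loop it comes from (rpc_cmd is the harness wrapper around scripts/rpc.py; arguments exactly as traced):

rpc_cmd nvmf_create_transport -t tcp -o -u 8192

for i in $(seq 1 4); do
    rpc_cmd bdev_null_create "Null$i" 102400 512   # null bdev backing the namespace
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
        -a -s "SPDK0000000000000$i"                # -a: allow any host
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done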
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:43.124 [2024-10-16 06:54:42.401562] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:43.124 Null2 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:11:43.124 Null3 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:43.124 Null4 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.124 06:54:42 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.124 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:11:43.386 00:11:43.386 Discovery Log Number of Records 6, Generation counter 6 00:11:43.386 =====Discovery Log Entry 0====== 00:11:43.386 trtype: tcp 00:11:43.386 adrfam: ipv4 00:11:43.386 subtype: current discovery subsystem 00:11:43.386 treq: not required 00:11:43.386 portid: 0 00:11:43.386 trsvcid: 4420 00:11:43.386 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:43.386 traddr: 10.0.0.2 00:11:43.386 eflags: explicit discovery connections, duplicate discovery information 00:11:43.386 sectype: none 00:11:43.386 =====Discovery Log Entry 1====== 00:11:43.386 trtype: tcp 00:11:43.386 adrfam: ipv4 00:11:43.386 subtype: nvme subsystem 00:11:43.386 treq: not required 00:11:43.386 portid: 0 00:11:43.386 trsvcid: 4420 00:11:43.386 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:43.386 traddr: 10.0.0.2 00:11:43.386 eflags: none 00:11:43.386 sectype: none 00:11:43.386 =====Discovery Log Entry 2====== 00:11:43.386 trtype: tcp 00:11:43.386 adrfam: ipv4 00:11:43.386 subtype: nvme subsystem 00:11:43.386 treq: not required 00:11:43.386 portid: 0 00:11:43.386 trsvcid: 4420 00:11:43.386 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:43.386 traddr: 10.0.0.2 00:11:43.386 eflags: none 00:11:43.386 sectype: none 00:11:43.386 =====Discovery Log Entry 3====== 00:11:43.386 trtype: tcp 00:11:43.386 adrfam: ipv4 00:11:43.386 subtype: nvme subsystem 00:11:43.386 treq: not required 00:11:43.386 portid: 0 00:11:43.386 trsvcid: 4420 00:11:43.386 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:43.386 traddr: 10.0.0.2 00:11:43.386 eflags: none 00:11:43.386 sectype: none 00:11:43.386 =====Discovery Log Entry 4====== 00:11:43.386 trtype: tcp 00:11:43.386 adrfam: ipv4 00:11:43.386 subtype: nvme subsystem 
00:11:43.386 treq: not required 00:11:43.386 portid: 0 00:11:43.386 trsvcid: 4420 00:11:43.386 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:43.386 traddr: 10.0.0.2 00:11:43.386 eflags: none 00:11:43.386 sectype: none 00:11:43.386 =====Discovery Log Entry 5====== 00:11:43.386 trtype: tcp 00:11:43.386 adrfam: ipv4 00:11:43.386 subtype: discovery subsystem referral 00:11:43.386 treq: not required 00:11:43.386 portid: 0 00:11:43.386 trsvcid: 4430 00:11:43.386 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:43.386 traddr: 10.0.0.2 00:11:43.386 eflags: none 00:11:43.386 sectype: none 00:11:43.386 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:43.386 Perform nvmf subsystem discovery via RPC 00:11:43.386 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:43.386 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.386 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:43.386 [ 00:11:43.386 { 00:11:43.386 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:43.386 "subtype": "Discovery", 00:11:43.386 "listen_addresses": [ 00:11:43.386 { 00:11:43.386 "trtype": "TCP", 00:11:43.386 "adrfam": "IPv4", 00:11:43.386 "traddr": "10.0.0.2", 00:11:43.386 "trsvcid": "4420" 00:11:43.386 } 00:11:43.386 ], 00:11:43.386 "allow_any_host": true, 00:11:43.386 "hosts": [] 00:11:43.386 }, 00:11:43.386 { 00:11:43.386 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:43.386 "subtype": "NVMe", 00:11:43.386 "listen_addresses": [ 00:11:43.386 { 00:11:43.386 "trtype": "TCP", 00:11:43.386 "adrfam": "IPv4", 00:11:43.386 "traddr": "10.0.0.2", 00:11:43.386 "trsvcid": "4420" 00:11:43.386 } 00:11:43.386 ], 00:11:43.386 "allow_any_host": true, 00:11:43.386 "hosts": [], 00:11:43.386 "serial_number": "SPDK00000000000001", 00:11:43.386 "model_number": "SPDK bdev Controller", 00:11:43.386 "max_namespaces": 32, 00:11:43.386 "min_cntlid": 1, 00:11:43.386 "max_cntlid": 65519, 00:11:43.386 "namespaces": [ 00:11:43.386 { 00:11:43.386 "nsid": 1, 00:11:43.386 "bdev_name": "Null1", 00:11:43.386 "name": "Null1", 00:11:43.386 "nguid": "C8A22F49BE6849B9BB0B8279DCB1C68D", 00:11:43.386 "uuid": "c8a22f49-be68-49b9-bb0b-8279dcb1c68d" 00:11:43.386 } 00:11:43.386 ] 00:11:43.386 }, 00:11:43.386 { 00:11:43.386 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:43.386 "subtype": "NVMe", 00:11:43.386 "listen_addresses": [ 00:11:43.386 { 00:11:43.386 "trtype": "TCP", 00:11:43.386 "adrfam": "IPv4", 00:11:43.386 "traddr": "10.0.0.2", 00:11:43.386 "trsvcid": "4420" 00:11:43.386 } 00:11:43.386 ], 00:11:43.386 "allow_any_host": true, 00:11:43.386 "hosts": [], 00:11:43.386 "serial_number": "SPDK00000000000002", 00:11:43.386 "model_number": "SPDK bdev Controller", 00:11:43.386 "max_namespaces": 32, 00:11:43.386 "min_cntlid": 1, 00:11:43.386 "max_cntlid": 65519, 00:11:43.386 "namespaces": [ 00:11:43.386 { 00:11:43.386 "nsid": 1, 00:11:43.386 "bdev_name": "Null2", 00:11:43.386 "name": "Null2", 00:11:43.386 "nguid": "1BB870089A784F9BA79F634666C44D6E", 00:11:43.386 "uuid": "1bb87008-9a78-4f9b-a79f-634666c44d6e" 00:11:43.386 } 00:11:43.386 ] 00:11:43.386 }, 00:11:43.386 { 00:11:43.386 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:43.386 "subtype": "NVMe", 00:11:43.386 "listen_addresses": [ 00:11:43.386 { 00:11:43.386 "trtype": "TCP", 00:11:43.386 "adrfam": "IPv4", 00:11:43.386 "traddr": "10.0.0.2", 
00:11:43.386 "trsvcid": "4420" 00:11:43.386 } 00:11:43.386 ], 00:11:43.386 "allow_any_host": true, 00:11:43.386 "hosts": [], 00:11:43.386 "serial_number": "SPDK00000000000003", 00:11:43.386 "model_number": "SPDK bdev Controller", 00:11:43.386 "max_namespaces": 32, 00:11:43.386 "min_cntlid": 1, 00:11:43.386 "max_cntlid": 65519, 00:11:43.386 "namespaces": [ 00:11:43.386 { 00:11:43.386 "nsid": 1, 00:11:43.386 "bdev_name": "Null3", 00:11:43.386 "name": "Null3", 00:11:43.386 "nguid": "729BC473EB394C0F972F3F4C8D644D82", 00:11:43.386 "uuid": "729bc473-eb39-4c0f-972f-3f4c8d644d82" 00:11:43.386 } 00:11:43.386 ] 00:11:43.386 }, 00:11:43.386 { 00:11:43.386 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:43.386 "subtype": "NVMe", 00:11:43.386 "listen_addresses": [ 00:11:43.386 { 00:11:43.386 "trtype": "TCP", 00:11:43.386 "adrfam": "IPv4", 00:11:43.386 "traddr": "10.0.0.2", 00:11:43.386 "trsvcid": "4420" 00:11:43.386 } 00:11:43.386 ], 00:11:43.386 "allow_any_host": true, 00:11:43.386 "hosts": [], 00:11:43.386 "serial_number": "SPDK00000000000004", 00:11:43.386 "model_number": "SPDK bdev Controller", 00:11:43.386 "max_namespaces": 32, 00:11:43.386 "min_cntlid": 1, 00:11:43.386 "max_cntlid": 65519, 00:11:43.386 "namespaces": [ 00:11:43.386 { 00:11:43.386 "nsid": 1, 00:11:43.386 "bdev_name": "Null4", 00:11:43.386 "name": "Null4", 00:11:43.386 "nguid": "40BDC76F0B4C47E2A182CA32BB9CE19D", 00:11:43.386 "uuid": "40bdc76f-0b4c-47e2-a182-ca32bb9ce19d" 00:11:43.386 } 00:11:43.386 ] 00:11:43.386 } 00:11:43.386 ] 00:11:43.386 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.386 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:43.386 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:43.386 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:43.386 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.386 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:43.386 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.386 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:43.386 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.386 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:43.386 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.386 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:43.387 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:43.387 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.387 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:43.387 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.387 06:54:42 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:43.387 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.387 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:43.387 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.387 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:43.387 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:43.387 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.387 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:43.387 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.387 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:43.387 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.387 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:43.387 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.387 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:43.387 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:43.387 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.387 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:43.387 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.387 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:43.387 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.387 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:43.387 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.387 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:43.387 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.387 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:43.387 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.387 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:43.387 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:43.387 06:54:42 
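Teardown mirrors setup in reverse: delete each subsystem, then its backing bdev, then drop the referral, and finally confirm nothing leaked. The records above collapse to:

for i in $(seq 1 4); do
    rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
    rpc_cmd bdev_null_delete "Null$i"
done
rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430

# The empty check_bdevs seen in the trace confirms no bdev survived cleanup.
check_bdevs=$(rpc_cmd bdev_get_bdevs | jq -r '.[].name')
[ -n "$check_bdevs" ] && echo "leftover bdevs: $check_bdevs"   # message illustrative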
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.387 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:43.387 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.648 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:43.648 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:43.648 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:43.648 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:43.648 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:43.648 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:43.648 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:43.648 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:43.648 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:43.648 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:43.648 rmmod nvme_tcp 00:11:43.648 rmmod nvme_fabrics 00:11:43.648 rmmod nvme_keyring 00:11:43.648 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:43.648 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:43.648 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:43.648 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@515 -- # '[' -n 3020155 ']' 00:11:43.648 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # killprocess 3020155 00:11:43.648 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 3020155 ']' 00:11:43.648 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 3020155 00:11:43.648 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:11:43.648 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:43.648 06:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3020155 00:11:43.648 06:54:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:43.648 06:54:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:43.648 06:54:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3020155' 00:11:43.648 killing process with pid 3020155 00:11:43.648 06:54:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 3020155 00:11:43.648 06:54:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 3020155 00:11:43.909 06:54:43 
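killprocess (traced above) is deliberately careful before killing PID 3020155: it verifies the PID is non-empty and alive, resolves the comm name — reactor_0 here, i.e. the SPDK reactor thread itself — refuses to proceed if that name is sudo, and only then kills and waits so the next test starts clean. A sketch of that guard in the same shape as the trace (SPDK's killprocess lives in autotest_common.sh):

killprocess_sketch() {
    local pid=$1 process_name=
    [ -z "$pid" ] && return 1
    kill -0 "$pid" || return 1                            # must still be running
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 for an SPDK app
    fi
    [ "$process_name" = sudo ] && return 1                # never kill the sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"   # reap it; valid because the target is our child process
}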
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:43.909 06:54:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:43.909 06:54:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:43.909 06:54:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:43.909 06:54:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-save 00:11:43.909 06:54:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:43.909 06:54:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:11:43.909 06:54:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:43.909 06:54:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:43.909 06:54:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:43.909 06:54:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:43.909 06:54:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:45.826 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:45.826 00:11:45.826 real 0m11.604s 00:11:45.826 user 0m8.684s 00:11:45.826 sys 0m6.145s 00:11:45.826 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:45.826 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:45.826 ************************************ 00:11:45.826 END TEST nvmf_target_discovery 00:11:45.826 ************************************ 00:11:46.088 06:54:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:46.088 06:54:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:46.088 06:54:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:46.088 06:54:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:46.088 ************************************ 00:11:46.088 START TEST nvmf_referrals 00:11:46.088 ************************************ 00:11:46.088 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:46.088 * Looking for test storage... 
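The discovery test finishes in about 11.6 s wall clock, the harness prints its END TEST banner, and run_test immediately launches the next suite, referrals.sh, under the same tcp transport. An illustrative wrapper in the spirit of run_test — the banner format matches the log, but the internals are not SPDK's:

run_test_sketch() {
    local name=$1 rc
    shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"      # the harness reports real/user/sys, as seen above
    rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

run_test_sketch nvmf_referrals \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp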
00:11:46.088 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:46.088 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:46.088 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lcov --version 00:11:46.088 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:46.088 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:46.088 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:46.088 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:46.088 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:46.088 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:46.088 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:46.088 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:46.089 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:46.089 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:46.089 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:46.089 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:46.089 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:46.089 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:46.089 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:46.089 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:46.089 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:46.089 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:46.089 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:46.089 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:46.089 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:46.089 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:46.089 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:46.089 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:46.089 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:46.089 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:46.089 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:46.089 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:46.089 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:46.089 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:46.089 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:46.089 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:46.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.089 --rc genhtml_branch_coverage=1 00:11:46.089 --rc genhtml_function_coverage=1 00:11:46.089 --rc genhtml_legend=1 00:11:46.089 --rc geninfo_all_blocks=1 00:11:46.089 --rc geninfo_unexecuted_blocks=1 00:11:46.089 00:11:46.089 ' 00:11:46.089 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:46.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.089 --rc genhtml_branch_coverage=1 00:11:46.089 --rc genhtml_function_coverage=1 00:11:46.089 --rc genhtml_legend=1 00:11:46.089 --rc geninfo_all_blocks=1 00:11:46.089 --rc geninfo_unexecuted_blocks=1 00:11:46.089 00:11:46.089 ' 00:11:46.089 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:46.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.089 --rc genhtml_branch_coverage=1 00:11:46.089 --rc genhtml_function_coverage=1 00:11:46.089 --rc genhtml_legend=1 00:11:46.089 --rc geninfo_all_blocks=1 00:11:46.089 --rc geninfo_unexecuted_blocks=1 00:11:46.089 00:11:46.089 ' 00:11:46.089 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:46.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.089 --rc genhtml_branch_coverage=1 00:11:46.089 --rc genhtml_function_coverage=1 00:11:46.089 --rc genhtml_legend=1 00:11:46.089 --rc geninfo_all_blocks=1 00:11:46.089 --rc geninfo_unexecuted_blocks=1 00:11:46.089 00:11:46.089 ' 00:11:46.089 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:46.089 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
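The probe above is scripts/common.sh comparing lcov versions in pure bash: both dotted versions are split on '.', '-', and ':' (IFS=.-:), fields are compared numerically left to right, and missing fields count as zero — here 1.15 < 2, so the --rc lcov_branch_coverage flag set gets exported. The traced algorithm, condensed into one function:

cmp_lt() {   # cmp_lt 1.15 2  -> returns 0 iff $1 < $2
    local -a ver1 ver2
    local v
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1   # strictly greater: not less-than
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
    done
    return 1   # equal: not less-than
}

cmp_lt 1.15 2 && echo '1.15 < 2'   # matches the trace: lt 1.15 2 succeeded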
# uname -s 00:11:46.089 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:46.089 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:46.089 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:46.089 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:46.089 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:46.089 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:46.089 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:46.089 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:46.089 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:46.089 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:46.351 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:46.351 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:46.352 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:46.352 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:46.352 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:46.352 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:46.352 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:46.352 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:11:46.352 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:46.352 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:46.352 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:46.352 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.352 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.352 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.352 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:46.352 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.352 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:46.352 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:46.352 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:46.352 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:46.352 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:46.352 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:46.352 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:46.352 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:46.352 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:46.352 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:46.352 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:46.352 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:46.352 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
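The '[: : integer expression expected' complaint from nvmf/common.sh line 33 is a real, if harmless, shell bug: build_nvmf_app_args runs a numeric test of the shape [ "$var" -eq 1 ] while the variable is empty in this environment, and [ cannot treat an empty string as an integer. The test simply evaluates false — which happens to be the branch the harness wants — so execution continues. The standard hardening is to default the expansion (the variable name below is hypothetical; the log does not show which one is empty):

some_flag=''                        # empty, as in this run

# Failing shape, reproducing the logged error:
#   [ "$some_flag" -eq 1 ]         # -> [: : integer expression expected

# Hardened: empty/unset expands to 0, so the comparison is always well-typed.
if [ "${some_flag:-0}" -eq 1 ]; then
    echo "flag enabled"             # append the corresponding app argument here
fi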
00:11:46.352 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:46.352 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:46.352 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:46.352 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:46.352 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:46.352 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:46.352 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:46.352 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:46.352 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:46.352 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:46.352 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:46.352 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:46.352 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:46.352 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:46.352 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:46.352 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:46.352 06:54:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:54.525 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:54.525 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:54.525 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:54.525 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:54.525 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:54.525 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:54.525 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:54.525 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:54.525 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:54.525 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:11:54.525 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:54.525 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:54.525 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:54.525 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:54.525 06:54:52 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:54.525 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:54.525 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:54.525 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:54.525 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:54.525 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:54.525 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:54.525 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:54.525 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:54.525 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:54.525 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:54.525 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:54.525 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:54.525 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:54.525 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:54.525 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:54.525 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:54.525 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:54.525 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:54.525 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:54.525 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:54.525 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:54.525 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:54.525 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:54.525 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:54.525 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:54.526 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:54.526 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:54.526 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:54.526 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:54.526 
06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:54.526 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:54.526 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:54.526 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:54.526 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:54.526 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:54.526 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:54.526 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:54.526 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:54.526 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:54.526 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:54.526 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:54.526 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:54.526 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:54.526 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:54.526 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:54.526 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:54.526 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:54.526 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:54.526 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:54.526 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:54.526 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:54.526 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:54.526 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:54.526 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:54.526 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:54.526 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:54.526 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:54.526 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:54.526 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # is_hw=yes 00:11:54.526 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:54.526 06:54:52 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:54.526 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:54.526 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:54.526 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:54.526 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:54.526 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:54.526 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:54.526 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:54.526 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:54.526 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:54.526 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:54.526 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:54.526 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:54.526 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:54.526 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:54.526 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:54.526 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:54.526 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:54.526 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:54.526 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:54.526 06:54:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:54.526 06:54:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:54.526 06:54:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:54.526 06:54:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:54.526 06:54:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:54.526 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:54.526 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.551 ms 00:11:54.526 00:11:54.526 --- 10.0.0.2 ping statistics --- 00:11:54.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:54.526 rtt min/avg/max/mdev = 0.551/0.551/0.551/0.000 ms 00:11:54.526 06:54:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:54.526 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:54.526 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:11:54.526 00:11:54.526 --- 10.0.0.1 ping statistics --- 00:11:54.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:54.526 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:11:54.526 06:54:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:54.526 06:54:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # return 0 00:11:54.526 06:54:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:54.526 06:54:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:54.526 06:54:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:54.526 06:54:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:54.526 06:54:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:54.526 06:54:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:54.526 06:54:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:54.526 06:54:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:54.526 06:54:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:54.526 06:54:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:54.526 06:54:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:54.526 06:54:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # nvmfpid=3024686 00:11:54.526 06:54:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # waitforlisten 3024686 00:11:54.526 06:54:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:54.526 06:54:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 3024686 ']' 00:11:54.526 06:54:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:54.526 06:54:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:54.526 06:54:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:54.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
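
The trace above condenses to a small amount of plumbing: nvmf_tcp_init moves one E810 port into a private network namespace, addresses both ends, opens the firewall for the NVMe/TCP port, verifies reachability with ping in both directions, and then nvmfappstart launches nvmf_tgt inside that namespace. A minimal sketch of those steps, assuming ports already renamed cvl_0_0/cvl_0_1 and with paths shortened from the log (waitforlisten does more than this; it is reduced here to a socket poll):

    #!/usr/bin/env bash
    set -e
    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"          # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator port stays in the root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # Tagged with an SPDK_NVMF comment so teardown can strip the rule later.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                       # root ns -> namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1   # namespace -> root ns
    modprobe nvme-tcp                        # initiator-side module for nvme discover
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
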
00:11:54.526 06:54:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:54.526 06:54:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:54.526 [2024-10-16 06:54:53.236604] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:11:54.526 [2024-10-16 06:54:53.236671] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:54.526 [2024-10-16 06:54:53.327123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:54.526 [2024-10-16 06:54:53.380114] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:54.526 [2024-10-16 06:54:53.380160] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:54.526 [2024-10-16 06:54:53.380170] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:54.526 [2024-10-16 06:54:53.380177] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:54.526 [2024-10-16 06:54:53.380183] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:54.526 [2024-10-16 06:54:53.382255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:54.526 [2024-10-16 06:54:53.382416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:54.526 [2024-10-16 06:54:53.382579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.526 [2024-10-16 06:54:53.382579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:54.789 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:54.789 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:11:54.789 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:54.789 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:54.789 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:54.789 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:54.789 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:54.789 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.789 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:54.789 [2024-10-16 06:54:54.114131] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:54.789 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.789 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:54.789 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.789 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
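
With the target up, referrals.sh@40-41 creates the TCP transport and a discovery listener over the RPC socket; the tcp.c listen notice that follows confirms the listener. rpc_cmd ultimately drives SPDK's scripts/rpc.py against /var/tmp/spdk.sock, so the two calls are roughly (a sketch, flags copied verbatim from the trace):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    # Listen for the discovery subsystem (nqn.2014-08.org.nvmexpress.discovery)
    # on the namespaced target address, port 8009.
    ./scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
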
00:11:54.789 [2024-10-16 06:54:54.130466] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:54.789 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.789 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:54.789 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.789 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:54.789 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.789 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:54.789 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.789 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:54.789 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.789 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:54.789 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.789 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:54.789 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.789 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:54.789 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:54.789 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.789 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:54.789 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.789 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:54.789 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:54.789 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:54.789 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:54.789 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:54.789 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.789 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:54.789 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:54.789 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.789 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:54.789 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:54.789 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:54.789 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:54.789 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:54.789 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:54.789 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:54.789 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:55.051 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:55.051 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:55.051 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:55.051 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.051 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:55.051 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.051 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:55.051 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.051 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:55.051 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.051 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:55.051 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.051 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:55.051 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.051 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:55.051 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:55.051 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.051 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:55.313 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.313 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:55.313 06:54:54 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:55.313 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:55.313 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:55.313 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:55.313 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:55.313 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:55.313 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:55.313 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:55.313 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:55.575 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.575 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:55.575 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.575 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:55.575 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.575 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:55.575 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.575 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:55.575 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:55.575 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:55.575 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:55.575 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.575 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:55.575 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:55.575 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.575 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:55.575 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:55.575 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:55.575 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:11:55.575 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:55.575 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:55.575 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:55.575 06:54:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:55.575 06:54:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:55.575 06:54:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:55.575 06:54:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:55.575 06:54:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:55.575 06:54:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:55.575 06:54:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:55.575 06:54:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:55.837 06:54:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:55.837 06:54:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:55.837 06:54:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:55.837 06:54:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:55.837 06:54:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:55.837 06:54:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:56.099 06:54:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:56.099 06:54:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:56.099 06:54:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.099 06:54:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:56.099 06:54:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.099 06:54:55 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:56.099 06:54:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:56.099 06:54:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:56.099 06:54:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:56.099 06:54:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.099 06:54:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:56.099 06:54:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:56.099 06:54:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.099 06:54:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:56.099 06:54:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:56.099 06:54:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:56.099 06:54:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:56.099 06:54:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:56.099 06:54:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:56.099 06:54:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:56.099 06:54:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:56.360 06:54:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:56.360 06:54:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:56.360 06:54:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:56.360 06:54:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:56.360 06:54:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:56.360 06:54:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:56.360 06:54:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:56.360 06:54:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:56.360 06:54:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:56.360 06:54:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:56.360 06:54:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:11:56.360 06:54:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:56.360 06:54:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:56.620 06:54:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:56.620 06:54:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:56.620 06:54:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.620 06:54:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:56.620 06:54:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.620 06:54:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:56.620 06:54:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:56.620 06:54:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.620 06:54:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:56.620 06:54:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.620 06:54:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:56.620 06:54:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:56.620 06:54:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:56.620 06:54:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:56.620 06:54:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:56.620 06:54:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:56.620 06:54:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:56.881 06:54:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:56.881 06:54:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:56.881 06:54:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:56.881 06:54:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:56.881 06:54:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:56.881 06:54:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:56.881 06:54:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
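
Everything between the listener setup and the nvmftestfini teardown now running exercises the referral RPCs in both directions: entries are added and removed over rpc.py, and each state is cross-checked against what an initiator actually receives from the live discovery service. A condensed sketch of that loop (the --hostnqn/--hostid options are omitted here; the harness passes the generated NVME_HOSTNQN/NVME_HOSTID seen in the trace):

    rpc=./scripts/rpc.py
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        "$rpc" nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
    [ "$("$rpc" nvmf_discovery_get_referrals | jq length)" -eq 3 ]
    # The same three addresses must show up in a live discovery log page.
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json |
        jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        "$rpc" nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
    done
    [ "$("$rpc" nvmf_discovery_get_referrals | jq length)" -eq 0 ]
    # -n attaches a subsystem NQN, distinguishing a discovery-subsystem
    # referral from an nvme-subsystem referral, as checked above via jq
    # selects on .subtype and .subnqn.
    "$rpc" nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
    "$rpc" nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
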
00:11:56.881 06:54:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:11:56.881 06:54:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:56.881 06:54:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:56.881 rmmod nvme_tcp 00:11:56.881 rmmod nvme_fabrics 00:11:56.881 rmmod nvme_keyring 00:11:56.881 06:54:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:56.881 06:54:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:56.881 06:54:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:56.881 06:54:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@515 -- # '[' -n 3024686 ']' 00:11:56.881 06:54:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # killprocess 3024686 00:11:56.881 06:54:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 3024686 ']' 00:11:56.881 06:54:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 3024686 00:11:56.881 06:54:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:11:56.881 06:54:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:56.881 06:54:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3024686 00:11:57.142 06:54:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:57.142 06:54:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:57.142 06:54:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3024686' 00:11:57.142 killing process with pid 3024686 00:11:57.142 06:54:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 3024686 00:11:57.142 06:54:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 3024686 00:11:57.142 06:54:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:57.142 06:54:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:57.142 06:54:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:57.142 06:54:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:57.142 06:54:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-save 00:11:57.142 06:54:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:57.142 06:54:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-restore 00:11:57.142 06:54:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:57.142 06:54:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:57.142 06:54:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:57.142 06:54:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:57.142 06:54:56 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:59.688 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:59.688 00:11:59.688 real 0m13.270s 00:11:59.688 user 0m15.829s 00:11:59.688 sys 0m6.585s 00:11:59.688 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:59.688 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:59.688 ************************************ 00:11:59.688 END TEST nvmf_referrals 00:11:59.689 ************************************ 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:59.689 ************************************ 00:11:59.689 START TEST nvmf_connect_disconnect 00:11:59.689 ************************************ 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:59.689 * Looking for test storage... 00:11:59.689 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:59.689 06:54:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:59.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.689 --rc genhtml_branch_coverage=1 00:11:59.689 --rc genhtml_function_coverage=1 00:11:59.689 --rc genhtml_legend=1 00:11:59.689 --rc geninfo_all_blocks=1 00:11:59.689 --rc geninfo_unexecuted_blocks=1 00:11:59.689 00:11:59.689 ' 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:59.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.689 --rc genhtml_branch_coverage=1 00:11:59.689 --rc genhtml_function_coverage=1 00:11:59.689 --rc genhtml_legend=1 00:11:59.689 --rc geninfo_all_blocks=1 00:11:59.689 --rc geninfo_unexecuted_blocks=1 00:11:59.689 00:11:59.689 ' 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:59.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.689 --rc genhtml_branch_coverage=1 00:11:59.689 --rc genhtml_function_coverage=1 00:11:59.689 --rc genhtml_legend=1 00:11:59.689 --rc geninfo_all_blocks=1 00:11:59.689 --rc geninfo_unexecuted_blocks=1 00:11:59.689 00:11:59.689 ' 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:59.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.689 --rc genhtml_branch_coverage=1 00:11:59.689 --rc genhtml_function_coverage=1 00:11:59.689 --rc genhtml_legend=1 00:11:59.689 --rc geninfo_all_blocks=1 00:11:59.689 --rc geninfo_unexecuted_blocks=1 00:11:59.689 00:11:59.689 ' 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:59.689 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:59.690 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:59.690 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:59.690 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:59.690 06:54:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:59.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:59.690 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:59.690 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:59.690 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:59.690 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:59.690 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:59.690 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:59.690 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:59.690 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:59.690 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:59.690 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:59.690 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:59.690 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:59.690 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:59.690 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:59.690 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:59.690 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:59.690 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:59.690 06:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:07.832 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:07.832 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:07.832 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:07.832 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:07.832 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:07.832 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:07.832 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:07.832 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:07.832 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:07.832 
06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:07.832 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:07.832 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:07.832 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:07.832 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:07.832 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:07.832 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:07.832 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:07.832 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:07.832 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:07.832 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:07.832 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:07.832 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:07.832 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:07.832 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:07.832 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:07.832 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:07.832 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:07.832 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:07.832 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:07.832 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:07.832 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:07.832 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:07.832 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:07.832 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:07.832 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:07.832 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:07.832 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:07.832 
06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:07.832 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:07.832 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:07.832 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:07.832 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:07.832 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:07.832 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:07.833 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 
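The discovery pass traced above is a small sysfs walk: each e810 PCI function is matched by vendor:device ID and then mapped to its kernel net device. A minimal sketch under the assumptions visible in this run (two ice-driven ports at 0000:4b:00.0/.1; the up-check is assumed to read operstate):

  # sketch: map each matched NIC PCI function to its net device (paths assumed)
  net_devs=()
  for pci in 0000:4b:00.0 0000:4b:00.1; do
      for dev in /sys/bus/pci/devices/"$pci"/net/*; do
          [[ $(cat "$dev/operstate") == up ]] && net_devs+=("${dev##*/}")
      done
  done
  echo "${net_devs[@]}"    # expected in this run: cvl_0_0 cvl_0_1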
00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:07.833 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:07.833 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:07.833 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.701 ms 00:12:07.833 00:12:07.833 --- 10.0.0.2 ping statistics --- 00:12:07.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.833 rtt min/avg/max/mdev = 0.701/0.701/0.701/0.000 ms 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:07.833 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:07.833 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:12:07.833 00:12:07.833 --- 10.0.0.1 ping statistics --- 00:12:07.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.833 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # return 0 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # nvmfpid=3029684 00:12:07.833 06:55:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # waitforlisten 3029684 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 3029684 ']' 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:07.833 06:55:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:07.833 [2024-10-16 06:55:06.572676] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:12:07.833 [2024-10-16 06:55:06.572739] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:07.833 [2024-10-16 06:55:06.663340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:07.833 [2024-10-16 06:55:06.717737] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:07.833 [2024-10-16 06:55:06.717797] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:07.833 [2024-10-16 06:55:06.717807] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:07.833 [2024-10-16 06:55:06.717815] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:07.833 [2024-10-16 06:55:06.717822] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
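Condensed, the nvmftestinit sequence traced above builds a two-port loopback topology (one port moved into a network namespace so target and initiator get separate stacks) and then launches the target inside it. A sketch using the exact interfaces and addresses from this run; the readiness loop at the end is an assumption, not the script's waitforlisten helper:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # NVMe/TCP port
  ping -c 1 10.0.0.2                                                  # sanity check
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  until ./scripts/rpc.py rpc_get_methods &> /dev/null; do sleep 0.5; done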
00:12:07.833 [2024-10-16 06:55:06.720449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:07.833 [2024-10-16 06:55:06.720615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:07.833 [2024-10-16 06:55:06.720780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.833 [2024-10-16 06:55:06.720781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:08.105 06:55:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:08.105 06:55:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:12:08.105 06:55:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:08.105 06:55:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:08.105 06:55:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:08.105 06:55:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:08.105 06:55:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:08.105 06:55:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.105 06:55:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:08.105 [2024-10-16 06:55:07.456139] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:08.105 06:55:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.105 06:55:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:08.105 06:55:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.105 06:55:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:08.106 06:55:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.106 06:55:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:08.106 06:55:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:08.106 06:55:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.106 06:55:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:08.106 06:55:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.106 06:55:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:08.106 06:55:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.106 06:55:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:08.106 06:55:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.106 06:55:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:08.106 06:55:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.106 06:55:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:08.106 [2024-10-16 06:55:07.535491] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:08.106 06:55:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.106 06:55:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:08.106 06:55:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:08.106 06:55:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:12.311 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.748 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.048 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.255 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.555 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.555 06:55:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:26.555 06:55:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:26.555 06:55:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:26.555 06:55:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:12:26.555 06:55:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:26.555 06:55:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:12:26.555 06:55:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:26.555 06:55:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:26.555 rmmod nvme_tcp 00:12:26.555 rmmod nvme_fabrics 00:12:26.555 rmmod nvme_keyring 00:12:26.555 06:55:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:26.555 06:55:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:12:26.555 06:55:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:12:26.555 06:55:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@515 -- # '[' -n 3029684 ']' 00:12:26.555 06:55:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # killprocess 3029684 00:12:26.555 06:55:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 3029684 ']' 00:12:26.555 06:55:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 3029684 00:12:26.555 06:55:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 
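The iteration body itself runs under set +x, so only the disconnect summaries appear above. A hedged reconstruction of what each of the 5 iterations does, using the subsystem NQN, serial, address, and port configured earlier in this trace (the exact nvme-cli flags are assumptions; waitforserial is the test helper that polls until the namespace shows up):

  for ((i = 0; i < 5; i++)); do
      nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
      waitforserial SPDKISFASTANDAWESOME
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints "... disconnected 1 controller(s)"
  done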
00:12:26.555 06:55:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:26.555 06:55:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3029684 00:12:26.555 06:55:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:26.555 06:55:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:26.556 06:55:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3029684' 00:12:26.556 killing process with pid 3029684 00:12:26.556 06:55:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 3029684 00:12:26.556 06:55:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 3029684 00:12:26.556 06:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:26.556 06:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:26.556 06:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:26.556 06:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:12:26.556 06:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-save 00:12:26.556 06:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:26.556 06:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-restore 00:12:26.556 06:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:26.556 06:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:26.556 06:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:26.556 06:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:26.556 06:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.101 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:29.101 00:12:29.101 real 0m29.393s 00:12:29.101 user 1m19.216s 00:12:29.101 sys 0m7.088s 00:12:29.101 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:29.101 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:29.101 ************************************ 00:12:29.101 END TEST nvmf_connect_disconnect 00:12:29.101 ************************************ 00:12:29.101 06:55:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:29.101 06:55:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:29.101 06:55:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:29.101 06:55:28 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:29.101 ************************************ 00:12:29.101 START TEST nvmf_multitarget 00:12:29.101 ************************************ 00:12:29.101 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:29.101 * Looking for test storage... 00:12:29.101 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:29.101 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:29.101 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lcov --version 00:12:29.101 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:29.101 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:29.101 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:29.101 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:29.101 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:29.101 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:12:29.101 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:12:29.101 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:12:29.101 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:12:29.101 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:12:29.101 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:12:29.101 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:12:29.101 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:29.101 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:12:29.101 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:12:29.101 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:29.101 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:29.101 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:12:29.101 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:12:29.101 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:29.101 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:12:29.101 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:12:29.101 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:12:29.101 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:12:29.101 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:29.101 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:12:29.101 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:12:29.101 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:29.101 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:29.101 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:12:29.101 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:29.101 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:29.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.101 --rc genhtml_branch_coverage=1 00:12:29.101 --rc genhtml_function_coverage=1 00:12:29.101 --rc genhtml_legend=1 00:12:29.101 --rc geninfo_all_blocks=1 00:12:29.101 --rc geninfo_unexecuted_blocks=1 00:12:29.101 00:12:29.101 ' 00:12:29.101 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:29.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.101 --rc genhtml_branch_coverage=1 00:12:29.101 --rc genhtml_function_coverage=1 00:12:29.101 --rc genhtml_legend=1 00:12:29.101 --rc geninfo_all_blocks=1 00:12:29.101 --rc geninfo_unexecuted_blocks=1 00:12:29.101 00:12:29.101 ' 00:12:29.101 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:29.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.101 --rc genhtml_branch_coverage=1 00:12:29.101 --rc genhtml_function_coverage=1 00:12:29.102 --rc genhtml_legend=1 00:12:29.102 --rc geninfo_all_blocks=1 00:12:29.102 --rc geninfo_unexecuted_blocks=1 00:12:29.102 00:12:29.102 ' 00:12:29.102 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:29.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.102 --rc genhtml_branch_coverage=1 00:12:29.102 --rc genhtml_function_coverage=1 00:12:29.102 --rc genhtml_legend=1 00:12:29.102 --rc geninfo_all_blocks=1 00:12:29.102 --rc geninfo_unexecuted_blocks=1 00:12:29.102 00:12:29.102 ' 00:12:29.102 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:29.102 06:55:28 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:29.102 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:29.102 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:29.102 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:29.102 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:29.102 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:29.102 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:29.102 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:29.102 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:29.102 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:29.102 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:29.102 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:29.102 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:29.102 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:29.102 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:29.102 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:29.102 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:29.102 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:29.102 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:12:29.102 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:29.102 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:29.102 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:29.102 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.102 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.102 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.102 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:29.102 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.102 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:12:29.102 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:29.102 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:29.102 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:29.102 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:29.102 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:29.102 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:29.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:29.102 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:29.102 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:29.102 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:29.102 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:29.102 06:55:28 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:29.102 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:29.102 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:29.102 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:29.102 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:29.102 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:29.102 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.102 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:29.102 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.102 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:29.102 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:29.102 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:12:29.102 06:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:37.252 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:37.252 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:37.252 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:37.252 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # is_hw=yes 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:37.252 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:37.252 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.589 ms 00:12:37.252 00:12:37.252 --- 10.0.0.2 ping statistics --- 00:12:37.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.252 rtt min/avg/max/mdev = 0.589/0.589/0.589/0.000 ms 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:37.252 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:37.252 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.309 ms 00:12:37.252 00:12:37.252 --- 10.0.0.1 ping statistics --- 00:12:37.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.252 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # return 0 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # nvmfpid=3037592 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # waitforlisten 3037592 00:12:37.252 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:37.253 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 3037592 ']' 00:12:37.253 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.253 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:37.253 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:37.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:37.253 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:37.253 06:55:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:37.253 [2024-10-16 06:55:36.020701] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
00:12:37.253 [2024-10-16 06:55:36.020770] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:37.253 [2024-10-16 06:55:36.112092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:37.253 [2024-10-16 06:55:36.164960] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:37.253 [2024-10-16 06:55:36.165014] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:37.253 [2024-10-16 06:55:36.165022] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:37.253 [2024-10-16 06:55:36.165030] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:37.253 [2024-10-16 06:55:36.165036] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:37.253 [2024-10-16 06:55:36.167478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:37.253 [2024-10-16 06:55:36.167614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:37.253 [2024-10-16 06:55:36.167777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:37.253 [2024-10-16 06:55:36.167778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.514 06:55:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:37.514 06:55:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:12:37.514 06:55:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:37.514 06:55:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:37.514 06:55:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:37.514 06:55:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:37.514 06:55:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:37.514 06:55:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:37.514 06:55:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:37.515 06:55:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:37.515 06:55:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:37.776 "nvmf_tgt_1" 00:12:37.776 06:55:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:37.776 "nvmf_tgt_2" 00:12:37.776 06:55:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
00:12:37.776 06:55:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:38.036 06:55:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:38.036 06:55:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:38.036 true 00:12:38.036 06:55:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:38.296 true 00:12:38.296 06:55:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:38.296 06:55:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:38.296 06:55:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:38.296 06:55:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:38.297 06:55:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:38.297 06:55:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:38.297 06:55:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:38.297 06:55:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:38.297 06:55:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:38.297 06:55:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:38.297 06:55:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:38.297 rmmod nvme_tcp 00:12:38.297 rmmod nvme_fabrics 00:12:38.297 rmmod nvme_keyring 00:12:38.297 06:55:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:38.297 06:55:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:38.297 06:55:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:38.297 06:55:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@515 -- # '[' -n 3037592 ']' 00:12:38.297 06:55:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # killprocess 3037592 00:12:38.297 06:55:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 3037592 ']' 00:12:38.297 06:55:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 3037592 00:12:38.297 06:55:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:12:38.297 06:55:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:38.297 06:55:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3037592 00:12:38.558 06:55:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:38.558 06:55:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:38.558 06:55:37 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3037592' 00:12:38.558 killing process with pid 3037592 00:12:38.558 06:55:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 3037592 00:12:38.558 06:55:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 3037592 00:12:38.558 06:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:38.558 06:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:38.558 06:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:38.558 06:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:12:38.558 06:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-save 00:12:38.558 06:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:38.558 06:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-restore 00:12:38.558 06:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:38.558 06:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:38.558 06:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:38.558 06:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:38.558 06:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:41.105 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:41.105 00:12:41.105 real 0m11.894s 00:12:41.105 user 0m10.256s 00:12:41.105 sys 0m6.324s 00:12:41.105 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:41.105 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:41.105 ************************************ 00:12:41.105 END TEST nvmf_multitarget 00:12:41.105 ************************************ 00:12:41.105 06:55:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:41.105 06:55:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:41.105 06:55:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:41.105 06:55:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:41.105 ************************************ 00:12:41.105 START TEST nvmf_rpc 00:12:41.105 ************************************ 00:12:41.105 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:41.105 * Looking for test storage... 
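Note: the nvmftestfini teardown traced at the end of the multitarget run reverses the earlier setup: unload the kernel initiator modules, kill the target app, restore iptables without the SPDK rule, and tear down the namespace plumbing. Roughly (the ip netns delete line is an assumption about what _remove_spdk_ns does; the other commands appear verbatim above):

  modprobe -v -r nvme-tcp     # -v shows the cascade: rmmod nvme_tcp/nvme_fabrics/nvme_keyring
  kill $nvmfpid
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the tagged ACCEPT rule
  ip netns delete cvl_0_0_ns_spdk                        # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1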
00:12:41.105 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:41.105 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:41.105 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:12:41.105 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:41.105 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:41.105 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:41.105 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:41.105 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:41.105 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:41.105 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:41.105 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:41.105 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:41.105 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:41.105 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:41.105 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:41.105 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:41.105 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:41.105 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:12:41.105 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:41.105 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:41.105 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:41.105 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:12:41.105 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:41.105 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:12:41.105 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:41.105 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:41.105 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:12:41.105 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:41.105 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:12:41.105 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:41.105 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:41.105 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:41.105 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:12:41.105 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:41.105 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:41.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.105 --rc genhtml_branch_coverage=1 00:12:41.105 --rc genhtml_function_coverage=1 00:12:41.105 --rc genhtml_legend=1 00:12:41.105 --rc geninfo_all_blocks=1 00:12:41.105 --rc geninfo_unexecuted_blocks=1 00:12:41.105 00:12:41.105 ' 00:12:41.105 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:41.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.105 --rc genhtml_branch_coverage=1 00:12:41.105 --rc genhtml_function_coverage=1 00:12:41.105 --rc genhtml_legend=1 00:12:41.105 --rc geninfo_all_blocks=1 00:12:41.105 --rc geninfo_unexecuted_blocks=1 00:12:41.105 00:12:41.105 ' 00:12:41.105 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:41.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.105 --rc genhtml_branch_coverage=1 00:12:41.105 --rc genhtml_function_coverage=1 00:12:41.105 --rc genhtml_legend=1 00:12:41.105 --rc geninfo_all_blocks=1 00:12:41.105 --rc geninfo_unexecuted_blocks=1 00:12:41.105 00:12:41.105 ' 00:12:41.105 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:41.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.105 --rc genhtml_branch_coverage=1 00:12:41.105 --rc genhtml_function_coverage=1 00:12:41.105 --rc genhtml_legend=1 00:12:41.105 --rc geninfo_all_blocks=1 00:12:41.105 --rc geninfo_unexecuted_blocks=1 00:12:41.105 00:12:41.105 ' 00:12:41.105 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:41.105 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:41.105 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
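Note: the "lt 1.15 2" exchange above is scripts/common.sh comparing the installed lcov version against 2: both strings are split on '.', '-' and ':', each field is validated as a decimal, and the first unequal field decides. A condensed equivalent of that comparison (not the harness code verbatim):

  IFS=.-: read -ra ver1 <<< "1.15"
  IFS=.-: read -ra ver2 <<< "2"
  for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
      (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { echo lt; break; }   # 1 < 2: lcov 1.15 is older
      (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { echo gt; break; }
  done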
00:12:41.105 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:41.105 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:41.105 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:41.105 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:41.106 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:41.106 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:41.106 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:41.106 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:41.106 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:41.106 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:41.106 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:41.106 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:41.106 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:41.106 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:41.106 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:41.106 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:41.106 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:41.106 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:41.106 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:41.106 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:41.106 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.106 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.106 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.106 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:41.106 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.106 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:12:41.106 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:41.106 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:41.106 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:41.106 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:41.106 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:41.106 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:41.106 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:41.106 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:41.106 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:41.106 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:41.106 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:41.106 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:41.106 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:41.106 06:55:40 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:41.106 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:41.106 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:41.106 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:41.106 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:41.106 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:41.106 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:41.106 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:41.106 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:41.106 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:12:41.106 06:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:49.251 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:49.251 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:49.251 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:49.251 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:49.252 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:49.252 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:49.252 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:49.252 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:49.252 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:49.252 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:49.252 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:49.252 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:49.252 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # is_hw=yes 00:12:49.252 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:49.252 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:49.252 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:49.252 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:49.252 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:49.252 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:49.252 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:49.252 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:49.252 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:49.252 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:49.252 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:49.252 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:49.252 06:55:47 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:49.252 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:49.252 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:49.252 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:49.252 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:49.252 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:49.252 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:49.252 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:49.252 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:49.252 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:49.252 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:49.252 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:49.252 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:49.252 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:49.252 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:49.252 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.620 ms 00:12:49.252 00:12:49.252 --- 10.0.0.2 ping statistics --- 00:12:49.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:49.252 rtt min/avg/max/mdev = 0.620/0.620/0.620/0.000 ms 00:12:49.252 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:49.252 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:49.252 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.346 ms 00:12:49.252 00:12:49.252 --- 10.0.0.1 ping statistics --- 00:12:49.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:49.252 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms 00:12:49.252 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:49.252 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # return 0 00:12:49.252 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:49.252 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:49.252 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:49.252 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:49.252 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:49.252 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:49.252 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:49.252 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:49.252 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:49.252 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:49.252 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.252 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # nvmfpid=3042271 00:12:49.252 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # waitforlisten 3042271 00:12:49.252 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:49.252 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 3042271 ']' 00:12:49.252 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:49.252 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:49.252 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:49.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:49.252 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:49.252 06:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.252 [2024-10-16 06:55:47.995866] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
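Note: the nvmf_rpc test starting here first inspects the freshly launched target through nvmf_get_stats (four poll groups for core mask 0xF, each with an empty "transports" array), then builds the object tree it exercises below. Reduced to the RPC calls that follow (rpc_cmd is the harness wrapper that drives the app's RPC socket):

  rpc_cmd nvmf_get_stats | jq '.poll_groups[].name' | wc -l    # expect 4, one per core
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192              # transports then report "TCP"
  rpc_cmd bdev_malloc_create 64 512 -b Malloc1                 # 64 MB bdev, 512 B blocks
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The connect attempts after that exercise the host allow-list: nvme connect is expected to fail with "does not allow host" until nvmf_subsystem_add_host (or nvmf_subsystem_allow_any_host -e) admits the initiator's NQN, which is exactly the NOT/Failed-to-write pattern captured further down.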
00:12:49.252 [2024-10-16 06:55:47.995935] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:49.252 [2024-10-16 06:55:48.086688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:49.252 [2024-10-16 06:55:48.139907] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:49.252 [2024-10-16 06:55:48.139965] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:49.252 [2024-10-16 06:55:48.139975] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:49.252 [2024-10-16 06:55:48.139982] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:49.252 [2024-10-16 06:55:48.139989] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:49.252 [2024-10-16 06:55:48.142455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:49.252 [2024-10-16 06:55:48.142615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:49.252 [2024-10-16 06:55:48.142777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:49.252 [2024-10-16 06:55:48.142777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:49.513 06:55:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:49.513 06:55:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:12:49.513 06:55:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:49.513 06:55:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:49.513 06:55:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.513 06:55:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:49.513 06:55:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:49.513 06:55:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.513 06:55:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.513 06:55:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.513 06:55:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:49.513 "tick_rate": 2400000000, 00:12:49.513 "poll_groups": [ 00:12:49.513 { 00:12:49.513 "name": "nvmf_tgt_poll_group_000", 00:12:49.513 "admin_qpairs": 0, 00:12:49.513 "io_qpairs": 0, 00:12:49.513 "current_admin_qpairs": 0, 00:12:49.513 "current_io_qpairs": 0, 00:12:49.513 "pending_bdev_io": 0, 00:12:49.513 "completed_nvme_io": 0, 00:12:49.513 "transports": [] 00:12:49.513 }, 00:12:49.513 { 00:12:49.513 "name": "nvmf_tgt_poll_group_001", 00:12:49.513 "admin_qpairs": 0, 00:12:49.513 "io_qpairs": 0, 00:12:49.513 "current_admin_qpairs": 0, 00:12:49.513 "current_io_qpairs": 0, 00:12:49.513 "pending_bdev_io": 0, 00:12:49.513 "completed_nvme_io": 0, 00:12:49.513 "transports": [] 00:12:49.513 }, 00:12:49.513 { 00:12:49.513 "name": "nvmf_tgt_poll_group_002", 00:12:49.513 "admin_qpairs": 0, 00:12:49.513 "io_qpairs": 0, 00:12:49.513 
"current_admin_qpairs": 0, 00:12:49.513 "current_io_qpairs": 0, 00:12:49.513 "pending_bdev_io": 0, 00:12:49.513 "completed_nvme_io": 0, 00:12:49.513 "transports": [] 00:12:49.513 }, 00:12:49.513 { 00:12:49.513 "name": "nvmf_tgt_poll_group_003", 00:12:49.513 "admin_qpairs": 0, 00:12:49.513 "io_qpairs": 0, 00:12:49.513 "current_admin_qpairs": 0, 00:12:49.513 "current_io_qpairs": 0, 00:12:49.513 "pending_bdev_io": 0, 00:12:49.513 "completed_nvme_io": 0, 00:12:49.513 "transports": [] 00:12:49.513 } 00:12:49.513 ] 00:12:49.513 }' 00:12:49.513 06:55:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:49.513 06:55:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:49.513 06:55:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:49.513 06:55:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:49.513 06:55:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:49.513 06:55:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:49.513 06:55:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:49.513 06:55:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:49.513 06:55:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.513 06:55:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.514 [2024-10-16 06:55:48.983473] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:49.514 06:55:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.514 06:55:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:49.514 06:55:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.514 06:55:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.514 06:55:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.774 06:55:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:49.774 "tick_rate": 2400000000, 00:12:49.774 "poll_groups": [ 00:12:49.774 { 00:12:49.774 "name": "nvmf_tgt_poll_group_000", 00:12:49.774 "admin_qpairs": 0, 00:12:49.774 "io_qpairs": 0, 00:12:49.774 "current_admin_qpairs": 0, 00:12:49.774 "current_io_qpairs": 0, 00:12:49.774 "pending_bdev_io": 0, 00:12:49.774 "completed_nvme_io": 0, 00:12:49.774 "transports": [ 00:12:49.774 { 00:12:49.774 "trtype": "TCP" 00:12:49.774 } 00:12:49.774 ] 00:12:49.774 }, 00:12:49.774 { 00:12:49.774 "name": "nvmf_tgt_poll_group_001", 00:12:49.774 "admin_qpairs": 0, 00:12:49.774 "io_qpairs": 0, 00:12:49.774 "current_admin_qpairs": 0, 00:12:49.774 "current_io_qpairs": 0, 00:12:49.774 "pending_bdev_io": 0, 00:12:49.774 "completed_nvme_io": 0, 00:12:49.774 "transports": [ 00:12:49.774 { 00:12:49.774 "trtype": "TCP" 00:12:49.774 } 00:12:49.774 ] 00:12:49.774 }, 00:12:49.774 { 00:12:49.774 "name": "nvmf_tgt_poll_group_002", 00:12:49.774 "admin_qpairs": 0, 00:12:49.774 "io_qpairs": 0, 00:12:49.774 "current_admin_qpairs": 0, 00:12:49.774 "current_io_qpairs": 0, 00:12:49.774 "pending_bdev_io": 0, 00:12:49.774 "completed_nvme_io": 0, 00:12:49.774 "transports": [ 00:12:49.774 { 00:12:49.774 "trtype": "TCP" 
00:12:49.774 } 00:12:49.774 ] 00:12:49.774 }, 00:12:49.774 { 00:12:49.774 "name": "nvmf_tgt_poll_group_003", 00:12:49.774 "admin_qpairs": 0, 00:12:49.774 "io_qpairs": 0, 00:12:49.774 "current_admin_qpairs": 0, 00:12:49.774 "current_io_qpairs": 0, 00:12:49.774 "pending_bdev_io": 0, 00:12:49.774 "completed_nvme_io": 0, 00:12:49.774 "transports": [ 00:12:49.774 { 00:12:49.774 "trtype": "TCP" 00:12:49.774 } 00:12:49.774 ] 00:12:49.774 } 00:12:49.774 ] 00:12:49.774 }' 00:12:49.774 06:55:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:49.774 06:55:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:49.774 06:55:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:49.774 06:55:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:49.774 06:55:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:49.774 06:55:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:49.774 06:55:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:49.775 06:55:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:49.775 06:55:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:49.775 06:55:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:49.775 06:55:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:49.775 06:55:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:49.775 06:55:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:49.775 06:55:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:49.775 06:55:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.775 06:55:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.775 Malloc1 00:12:49.775 06:55:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.775 06:55:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:49.775 06:55:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.775 06:55:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.775 06:55:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.775 06:55:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:49.775 06:55:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.775 06:55:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.775 06:55:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.775 06:55:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:49.775 06:55:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.775 06:55:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.775 06:55:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.775 06:55:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:49.775 06:55:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.775 06:55:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.775 [2024-10-16 06:55:49.190021] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:49.775 06:55:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.775 06:55:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:12:49.775 06:55:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:49.775 06:55:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:12:49.775 06:55:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:49.775 06:55:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:49.775 06:55:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:12:49.775 06:55:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:49.775 06:55:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:12:49.775 06:55:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:49.775 06:55:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:49.775 06:55:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:49.775 06:55:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:12:49.775 [2024-10-16 06:55:49.226954] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:12:49.775 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:49.775 could not add new controller: failed to write to nvme-fabrics device 00:12:49.775 06:55:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:49.775 06:55:49 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:49.775 06:55:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:49.775 06:55:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:49.775 06:55:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:49.775 06:55:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.775 06:55:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.775 06:55:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.775 06:55:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:51.687 06:55:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:51.687 06:55:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:51.687 06:55:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:51.687 06:55:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:51.687 06:55:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:53.598 06:55:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:53.598 06:55:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:53.598 06:55:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:53.598 06:55:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:53.598 06:55:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:53.598 06:55:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:53.598 06:55:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:53.598 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.598 06:55:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:53.598 06:55:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:53.598 06:55:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:53.598 06:55:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:53.599 06:55:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:53.599 06:55:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:53.599 06:55:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:53.599 06:55:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:53.599 06:55:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.599 06:55:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.599 06:55:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.599 06:55:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:53.599 06:55:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:53.599 06:55:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:53.599 06:55:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:53.599 06:55:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:53.599 06:55:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:12:53.599 06:55:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:53.599 06:55:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:12:53.599 06:55:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:53.599 06:55:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:53.599 06:55:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:53.599 06:55:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:53.599 [2024-10-16 06:55:52.983700] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:12:53.599 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:53.599 could not add new controller: failed to write to nvme-fabrics device 00:12:53.599 06:55:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:53.599 06:55:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:53.599 06:55:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:53.599 06:55:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:53.599 06:55:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:53.599 06:55:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.599 06:55:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.599 
06:55:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.599 06:55:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:55.510 06:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:55.510 06:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:55.510 06:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:55.510 06:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:55.510 06:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:57.423 06:55:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:57.423 06:55:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:57.423 06:55:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:57.423 06:55:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:57.423 06:55:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:57.423 06:55:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:57.423 06:55:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:57.423 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.423 06:55:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:57.423 06:55:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:57.423 06:55:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:57.423 06:55:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:57.423 06:55:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:57.423 06:55:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:57.423 06:55:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:57.423 06:55:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:57.423 06:55:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.423 06:55:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.423 06:55:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.423 06:55:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:57.423 06:55:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:57.423 06:55:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:57.423 
06:55:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.424 06:55:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.424 06:55:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.424 06:55:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:57.424 06:55:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.424 06:55:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.424 [2024-10-16 06:55:56.708989] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:57.424 06:55:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.424 06:55:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:57.424 06:55:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.424 06:55:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.424 06:55:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.424 06:55:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:57.424 06:55:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.424 06:55:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.424 06:55:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.424 06:55:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:58.810 06:55:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:58.810 06:55:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:58.810 06:55:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:58.810 06:55:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:58.810 06:55:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:01.356 06:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:01.356 06:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:01.356 06:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:01.356 06:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:01.356 06:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:01.356 06:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:01.356 06:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:01.356 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.356 06:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:01.356 06:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:01.356 06:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:01.356 06:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:01.356 06:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:01.356 06:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:01.356 06:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:01.356 06:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:01.356 06:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.356 06:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.357 06:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.357 06:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:01.357 06:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.357 06:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.357 06:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.357 06:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:01.357 06:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:01.357 06:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.357 06:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.357 06:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.357 06:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:01.357 06:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.357 06:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.357 [2024-10-16 06:56:00.427811] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:01.357 06:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.357 06:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:01.357 06:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.357 06:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.357 06:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.357 06:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:01.357 06:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.357 06:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.357 06:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.357 06:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:02.742 06:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:02.742 06:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:02.742 06:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:02.742 06:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:02.742 06:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:04.657 06:56:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:04.657 06:56:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:04.657 06:56:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:04.657 06:56:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:04.657 06:56:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:04.657 06:56:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:04.657 06:56:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:04.657 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.657 06:56:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:04.657 06:56:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:04.657 06:56:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:04.657 06:56:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:04.657 06:56:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:04.657 06:56:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:04.657 06:56:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:04.657 06:56:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:04.657 06:56:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.657 06:56:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.657 06:56:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.657 06:56:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:04.657 06:56:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.657 06:56:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.657 06:56:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.657 06:56:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:04.657 06:56:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:04.657 06:56:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.657 06:56:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.657 06:56:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.657 06:56:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:04.657 06:56:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.657 06:56:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.657 [2024-10-16 06:56:04.145429] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.657 06:56:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.657 06:56:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:04.657 06:56:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.657 06:56:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.918 06:56:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.918 06:56:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:04.918 06:56:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.918 06:56:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.918 06:56:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.918 06:56:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:06.302 06:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:06.302 06:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:06.302 06:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:06.302 06:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:06.302 06:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:08.214 
06:56:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:08.214 06:56:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:08.214 06:56:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:08.214 06:56:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:08.214 06:56:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:08.214 06:56:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:08.214 06:56:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:08.475 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.475 06:56:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:08.475 06:56:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:08.475 06:56:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:08.475 06:56:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:08.475 06:56:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:08.475 06:56:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:08.475 06:56:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:08.475 06:56:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:08.475 06:56:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.475 06:56:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.475 06:56:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.475 06:56:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:08.476 06:56:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.476 06:56:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.476 06:56:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.476 06:56:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:08.476 06:56:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:08.476 06:56:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.476 06:56:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.476 06:56:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.476 06:56:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:08.476 06:56:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:08.476 06:56:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.476 [2024-10-16 06:56:07.868111] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:08.476 06:56:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.476 06:56:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:08.476 06:56:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.476 06:56:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.476 06:56:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.476 06:56:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:08.476 06:56:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.476 06:56:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.476 06:56:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.476 06:56:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:10.390 06:56:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:10.390 06:56:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:10.390 06:56:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:10.390 06:56:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:10.390 06:56:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:12.304 06:56:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:12.304 06:56:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:12.304 06:56:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:12.304 06:56:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:12.304 06:56:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:12.304 06:56:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:12.304 06:56:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:12.304 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.304 06:56:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:12.304 06:56:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:12.304 06:56:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:12.304 06:56:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 
00:13:12.304 06:56:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:12.304 06:56:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:12.304 06:56:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:12.304 06:56:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:12.304 06:56:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.304 06:56:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.304 06:56:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.304 06:56:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:12.304 06:56:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.304 06:56:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.304 06:56:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.304 06:56:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:12.304 06:56:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:12.304 06:56:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.304 06:56:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.304 06:56:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.304 06:56:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:12.304 06:56:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.304 06:56:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.304 [2024-10-16 06:56:11.619182] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:12.304 06:56:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.304 06:56:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:12.304 06:56:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.304 06:56:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.304 06:56:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.304 06:56:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:12.304 06:56:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.304 06:56:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.304 06:56:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.304 06:56:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:14.217 06:56:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:14.217 06:56:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:14.217 06:56:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:14.217 06:56:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:14.217 06:56:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:16.126 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:16.126 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:16.126 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:16.126 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:16.126 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:16.126 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:16.126 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:16.126 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.126 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:16.126 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:16.126 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:16.126 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:16.126 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:16.126 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:16.126 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:16.126 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:16.126 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.126 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.126 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.126 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:16.126 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.126 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.126 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.126 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:16.126 
06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:16.126 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:16.126 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.126 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.126 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.127 [2024-10-16 06:56:15.394467] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.127 [2024-10-16 06:56:15.462609] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.127 
06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.127 [2024-10-16 06:56:15.530805] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.127 [2024-10-16 06:56:15.603036] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.127 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.388 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.388 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:16.388 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.388 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.388 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.388 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:16.388 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.388 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.388 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.388 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:16.388 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:16.388 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.388 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.388 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.388 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:16.388 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.388 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.388 [2024-10-16 06:56:15.667236] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:16.388 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.388 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:16.388 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.388 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.388 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.388 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:16.388 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.388 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.388 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.388 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:16.388 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.388 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.388 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.388 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:16.388 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.388 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.388 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.388 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:16.388 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.388 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.388 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.388 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:16.388 "tick_rate": 2400000000, 00:13:16.388 "poll_groups": [ 00:13:16.388 { 00:13:16.388 "name": "nvmf_tgt_poll_group_000", 00:13:16.388 "admin_qpairs": 0, 00:13:16.388 "io_qpairs": 224, 00:13:16.388 "current_admin_qpairs": 0, 00:13:16.388 "current_io_qpairs": 0, 00:13:16.388 "pending_bdev_io": 0, 00:13:16.388 "completed_nvme_io": 225, 00:13:16.388 "transports": [ 00:13:16.388 { 00:13:16.388 "trtype": "TCP" 00:13:16.388 } 00:13:16.388 ] 00:13:16.388 }, 00:13:16.388 { 00:13:16.388 "name": "nvmf_tgt_poll_group_001", 00:13:16.388 "admin_qpairs": 1, 00:13:16.388 "io_qpairs": 223, 00:13:16.388 "current_admin_qpairs": 0, 00:13:16.388 "current_io_qpairs": 0, 00:13:16.388 "pending_bdev_io": 0, 00:13:16.388 "completed_nvme_io": 273, 00:13:16.388 "transports": [ 00:13:16.388 { 00:13:16.388 "trtype": "TCP" 00:13:16.388 } 00:13:16.388 ] 00:13:16.388 }, 00:13:16.388 { 00:13:16.388 "name": "nvmf_tgt_poll_group_002", 00:13:16.388 "admin_qpairs": 6, 00:13:16.388 "io_qpairs": 218, 00:13:16.388 "current_admin_qpairs": 0, 00:13:16.388 "current_io_qpairs": 0, 00:13:16.388 "pending_bdev_io": 0, 00:13:16.388 "completed_nvme_io": 513, 00:13:16.388 "transports": [ 00:13:16.388 { 00:13:16.388 "trtype": "TCP" 00:13:16.388 } 00:13:16.388 ] 00:13:16.388 }, 00:13:16.388 { 00:13:16.388 "name": "nvmf_tgt_poll_group_003", 00:13:16.388 "admin_qpairs": 0, 00:13:16.388 "io_qpairs": 224, 00:13:16.388 "current_admin_qpairs": 0, 00:13:16.388 "current_io_qpairs": 0, 00:13:16.388 "pending_bdev_io": 0, 00:13:16.388 "completed_nvme_io": 228, 00:13:16.388 "transports": [ 00:13:16.388 { 00:13:16.388 "trtype": "TCP" 00:13:16.388 } 00:13:16.388 ] 00:13:16.388 } 00:13:16.388 ] 00:13:16.389 }' 00:13:16.389 06:56:15 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:16.389 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:16.389 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:16.389 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:16.389 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:16.389 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:16.389 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:16.389 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:16.389 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:16.389 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:13:16.389 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:16.389 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:16.389 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:16.389 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:16.389 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:13:16.389 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:16.389 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:13:16.389 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:16.389 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:16.389 rmmod nvme_tcp 00:13:16.389 rmmod nvme_fabrics 00:13:16.389 rmmod nvme_keyring 00:13:16.649 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:16.649 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:13:16.649 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:13:16.649 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@515 -- # '[' -n 3042271 ']' 00:13:16.649 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # killprocess 3042271 00:13:16.649 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 3042271 ']' 00:13:16.649 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 3042271 00:13:16.649 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:13:16.649 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:16.649 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3042271 00:13:16.649 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:16.649 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:16.649 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
3042271' 00:13:16.649 killing process with pid 3042271 00:13:16.649 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 3042271 00:13:16.649 06:56:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 3042271 00:13:16.649 06:56:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:16.649 06:56:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:16.649 06:56:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:16.649 06:56:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:13:16.649 06:56:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-save 00:13:16.649 06:56:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:16.649 06:56:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-restore 00:13:16.649 06:56:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:16.649 06:56:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:16.649 06:56:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:16.649 06:56:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:16.649 06:56:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:19.197 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:19.197 00:13:19.197 real 0m37.995s 00:13:19.197 user 1m53.798s 00:13:19.197 sys 0m7.866s 00:13:19.197 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:19.197 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.197 ************************************ 00:13:19.197 END TEST nvmf_rpc 00:13:19.197 ************************************ 00:13:19.197 06:56:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:19.197 06:56:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:19.197 06:56:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:19.197 06:56:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:19.197 ************************************ 00:13:19.197 START TEST nvmf_invalid 00:13:19.197 ************************************ 00:13:19.197 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:19.197 * Looking for test storage... 
00:13:19.197 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:19.197 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:19.197 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lcov --version 00:13:19.197 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:19.197 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:19.197 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:19.197 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:19.197 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:19.197 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:13:19.197 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:13:19.197 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:13:19.197 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:13:19.197 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:13:19.197 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:13:19.197 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:13:19.197 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:19.197 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:13:19.197 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:13:19.197 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:19.197 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:19.197 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:13:19.197 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:13:19.197 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:19.197 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:13:19.197 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:19.197 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:13:19.197 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:13:19.197 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:19.197 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:13:19.197 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:19.197 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:19.197 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:19.197 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:13:19.197 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:19.197 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:19.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.197 --rc genhtml_branch_coverage=1 00:13:19.197 --rc genhtml_function_coverage=1 00:13:19.197 --rc genhtml_legend=1 00:13:19.197 --rc geninfo_all_blocks=1 00:13:19.197 --rc geninfo_unexecuted_blocks=1 00:13:19.197 00:13:19.197 ' 00:13:19.198 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:19.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.198 --rc genhtml_branch_coverage=1 00:13:19.198 --rc genhtml_function_coverage=1 00:13:19.198 --rc genhtml_legend=1 00:13:19.198 --rc geninfo_all_blocks=1 00:13:19.198 --rc geninfo_unexecuted_blocks=1 00:13:19.198 00:13:19.198 ' 00:13:19.198 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:19.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.198 --rc genhtml_branch_coverage=1 00:13:19.198 --rc genhtml_function_coverage=1 00:13:19.198 --rc genhtml_legend=1 00:13:19.198 --rc geninfo_all_blocks=1 00:13:19.198 --rc geninfo_unexecuted_blocks=1 00:13:19.198 00:13:19.198 ' 00:13:19.198 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:19.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.198 --rc genhtml_branch_coverage=1 00:13:19.198 --rc genhtml_function_coverage=1 00:13:19.198 --rc genhtml_legend=1 00:13:19.198 --rc geninfo_all_blocks=1 00:13:19.198 --rc geninfo_unexecuted_blocks=1 00:13:19.198 00:13:19.198 ' 00:13:19.198 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:19.198 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:19.198 06:56:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:19.198 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:19.198 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:19.198 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:19.198 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:19.198 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:19.198 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:19.198 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:19.198 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:19.198 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:19.198 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:19.198 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:19.198 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:19.198 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:19.198 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:19.198 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:19.198 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:19.198 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:13:19.198 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:19.198 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:19.198 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:19.198 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.198 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.198 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.198 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:19.198 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.198 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:13:19.198 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:19.198 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:19.198 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:19.198 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:19.198 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:19.198 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:19.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:19.198 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:19.198 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:19.198 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:19.198 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:19.198 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
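The "[: : integer expression expected" message captured above is benign but worth decoding: build_nvmf_app_args runs a numeric test ('[' '' -eq 1 ']') whose variable expanded to the empty string, so test rejects it and the script falls through to its defaults. A hedged sketch of the failure mode and one common guard (flag is an illustrative name, not the variable used in nvmf/common.sh):

set +e
flag=""                                   # empty, as in the failing run
[ "$flag" -eq 1 ] && echo on              # stderr: integer expression expected
echo "exit status: $?"                    # 2: the test itself errored
[ "${flag:-0}" -eq 1 ] && echo on || echo "off (default applied)"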
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:19.198 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:19.198 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:19.198 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:19.198 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:19.198 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:19.198 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:19.198 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:19.198 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:19.198 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:19.198 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:19.198 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:19.198 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:19.198 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:19.198 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:19.198 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:13:19.198 06:56:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:27.347 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:27.347 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:13:27.347 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:27.347 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:27.347 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:27.347 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:27.347 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:27.347 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:13:27.347 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:27.347 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:13:27.347 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:27.348 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:27.348 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:27.348 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:27.348 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # is_hw=yes 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
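Both E810 ports have now been resolved to kernel net devices. The lookup traced above is a plain sysfs glob; a condensed sketch using the two PCI functions found in this run (the real loop and the cvl_* naming live in nvmf/common.sh):

# For each whitelisted PCI function, list the netdevs the kernel bound to it.
for pci in 0000:4b:00.0 0000:4b:00.1; do
    devs=( "/sys/bus/pci/devices/$pci/net/"* )
    [[ -e ${devs[0]} ]] || { echo "no net devices under $pci"; continue; }
    echo "Found net devices under $pci: ${devs[@]##*/}"   # strip sysfs path
done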
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:27.348 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:27.348 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.570 ms 00:13:27.348 00:13:27.348 --- 10.0.0.2 ping statistics --- 00:13:27.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.348 rtt min/avg/max/mdev = 0.570/0.570/0.570/0.000 ms 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:27.348 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:27.348 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:13:27.348 00:13:27.348 --- 10.0.0.1 ping statistics --- 00:13:27.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.348 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # return 0 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:27.348 06:56:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:27.348 06:56:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:27.348 06:56:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:27.348 06:56:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:27.348 06:56:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:27.348 06:56:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # nvmfpid=3052141 00:13:27.348 06:56:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # waitforlisten 3052141 00:13:27.348 06:56:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:27.348 06:56:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 3052141 ']' 00:13:27.348 06:56:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.349 06:56:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:27.349 06:56:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:27.349 06:56:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:27.349 06:56:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:27.349 [2024-10-16 06:56:26.096608] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
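Connectivity is now verified in both directions, which closes out nvmf_tcp_init. Collecting the commands it just traced into one place: the target-side port is moved into its own network namespace so target and initiator can share one host, and the NVMe/TCP listener port is opened up front (run as root; cvl_0_0 and cvl_0_1 are this run's E810 ports):

ip netns add cvl_0_0_ns_spdk                        # target namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator, root netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator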
00:13:27.349 [2024-10-16 06:56:26.096680] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:27.349 [2024-10-16 06:56:26.187682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:27.349 [2024-10-16 06:56:26.240788] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:27.349 [2024-10-16 06:56:26.240840] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:27.349 [2024-10-16 06:56:26.240857] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:27.349 [2024-10-16 06:56:26.240871] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:27.349 [2024-10-16 06:56:26.240877] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:27.349 [2024-10-16 06:56:26.242890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:27.349 [2024-10-16 06:56:26.242981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:27.349 [2024-10-16 06:56:26.243143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.349 [2024-10-16 06:56:26.243143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:27.610 06:56:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:27.610 06:56:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:13:27.610 06:56:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:27.610 06:56:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:27.610 06:56:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:27.610 06:56:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:27.610 06:56:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:27.610 06:56:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode19447 00:13:27.871 [2024-10-16 06:56:27.127117] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:27.871 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:27.871 { 00:13:27.871 "nqn": "nqn.2016-06.io.spdk:cnode19447", 00:13:27.871 "tgt_name": "foobar", 00:13:27.871 "method": "nvmf_create_subsystem", 00:13:27.871 "req_id": 1 00:13:27.871 } 00:13:27.871 Got JSON-RPC error response 00:13:27.871 response: 00:13:27.871 { 00:13:27.871 "code": -32603, 00:13:27.871 "message": "Unable to find target foobar" 00:13:27.871 }' 00:13:27.871 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:27.871 { 00:13:27.871 "nqn": "nqn.2016-06.io.spdk:cnode19447", 00:13:27.871 "tgt_name": "foobar", 00:13:27.871 "method": "nvmf_create_subsystem", 00:13:27.871 "req_id": 1 00:13:27.871 } 00:13:27.871 Got JSON-RPC error response 00:13:27.871 
response: 00:13:27.871 { 00:13:27.871 "code": -32603, 00:13:27.871 "message": "Unable to find target foobar" 00:13:27.871 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:27.871 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:27.871 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode1107 00:13:27.871 [2024-10-16 06:56:27.336033] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1107: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:28.133 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:28.134 { 00:13:28.134 "nqn": "nqn.2016-06.io.spdk:cnode1107", 00:13:28.134 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:28.134 "method": "nvmf_create_subsystem", 00:13:28.134 "req_id": 1 00:13:28.134 } 00:13:28.134 Got JSON-RPC error response 00:13:28.134 response: 00:13:28.134 { 00:13:28.134 "code": -32602, 00:13:28.134 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:28.134 }' 00:13:28.134 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:28.134 { 00:13:28.134 "nqn": "nqn.2016-06.io.spdk:cnode1107", 00:13:28.134 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:28.134 "method": "nvmf_create_subsystem", 00:13:28.134 "req_id": 1 00:13:28.134 } 00:13:28.134 Got JSON-RPC error response 00:13:28.134 response: 00:13:28.134 { 00:13:28.134 "code": -32602, 00:13:28.134 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:28.134 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:28.134 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:28.134 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode20430 00:13:28.134 [2024-10-16 06:56:27.544765] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20430: invalid model number 'SPDK_Controller' 00:13:28.134 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:28.134 { 00:13:28.134 "nqn": "nqn.2016-06.io.spdk:cnode20430", 00:13:28.134 "model_number": "SPDK_Controller\u001f", 00:13:28.134 "method": "nvmf_create_subsystem", 00:13:28.134 "req_id": 1 00:13:28.134 } 00:13:28.134 Got JSON-RPC error response 00:13:28.134 response: 00:13:28.134 { 00:13:28.134 "code": -32602, 00:13:28.134 "message": "Invalid MN SPDK_Controller\u001f" 00:13:28.134 }' 00:13:28.134 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:28.134 { 00:13:28.134 "nqn": "nqn.2016-06.io.spdk:cnode20430", 00:13:28.134 "model_number": "SPDK_Controller\u001f", 00:13:28.134 "method": "nvmf_create_subsystem", 00:13:28.134 "req_id": 1 00:13:28.134 } 00:13:28.134 Got JSON-RPC error response 00:13:28.134 response: 00:13:28.134 { 00:13:28.134 "code": -32602, 00:13:28.134 "message": "Invalid MN SPDK_Controller\u001f" 00:13:28.134 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:28.134 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:28.134 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:28.134 06:56:27 
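The three rejections above (nonexistent target name, control character in the serial number, control character in the model number) all follow one pattern: call nvmf_create_subsystem with a deliberately invalid parameter and assert on the JSON-RPC error text. A condensed sketch with the rpc.py path shortened (the JSON payloads above carry -32603 for the missing target and -32602 for the invalid fields):

rpc=./scripts/rpc.py   # shortened; the log uses the full workspace path
# Nonexistent target: expect "Unable to find target foobar".
out=$($rpc nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode19447 2>&1) || true
[[ $out == *"Unable to find target"* ]] && echo "bad target rejected"
# Serial number with a 0x1f control character: expect "Invalid SN".
out=$($rpc nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode1107 2>&1) || true
[[ $out == *"Invalid SN"* ]] && echo "control char in serial rejected"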
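What follows is gen_random_s expanding a 21-character serial number one character per iteration, so the trace is long but mechanical: pick a code point from the chars table (ASCII 32-127), printf it to hex, echo -e it back as a character, and append; the harness also checks that the first character is not '-' so the result cannot look like an option. A condensed sketch of the same idea (the modulo draw here approximates the harness's array indexing; RANDOM=0 was seeded earlier for reproducibility):

gen_random_s_sketch() {
    local length=$1 ll s=
    for (( ll = 0; ll < length; ll++ )); do
        local code=$(( 32 + RANDOM % 96 ))            # ASCII 32..127
        s+=$(echo -e "\\x$(printf %x "$code")")       # code point -> char
    done
    echo "$s"
}
RANDOM=0
gen_random_s_sketch 21   # the real run produced 'z]FE3hXmD;{4A-/:Ga:25'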
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:28.134 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:28.134 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:28.134 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:28.134 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.134 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:13:28.134 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:28.134 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:13:28.134 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.134 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.134 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:13:28.134 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:28.134 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:13:28.134 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.134 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.134 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:13:28.134 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:28.134 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:13:28.134 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.134 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.134 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:13:28.134 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:28.134 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:13:28.134 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.134 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.134 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:13:28.134 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:28.134 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:13:28.134 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.134 06:56:27 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.134 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:13:28.134 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:13:28.397 
06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:13:28.397 
06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.397 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.398 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:13:28.398 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:28.398 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:13:28.398 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.398 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.398 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ z == \- ]] 00:13:28.398 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'z]FE3hXmD;{4A-/:Ga:25' 00:13:28.398 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'z]FE3hXmD;{4A-/:Ga:25' nqn.2016-06.io.spdk:cnode27570 00:13:28.660 [2024-10-16 06:56:27.922209] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27570: invalid serial number 'z]FE3hXmD;{4A-/:Ga:25' 00:13:28.660 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:28.660 { 00:13:28.660 "nqn": "nqn.2016-06.io.spdk:cnode27570", 00:13:28.660 "serial_number": "z]FE3hXmD;{4A-/:Ga:25", 00:13:28.660 "method": "nvmf_create_subsystem", 00:13:28.660 "req_id": 1 00:13:28.660 } 00:13:28.660 Got JSON-RPC error response 00:13:28.660 response: 00:13:28.660 { 00:13:28.660 "code": -32602, 00:13:28.660 "message": "Invalid SN z]FE3hXmD;{4A-/:Ga:25" 00:13:28.660 }' 00:13:28.660 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:28.660 { 00:13:28.660 "nqn": "nqn.2016-06.io.spdk:cnode27570", 00:13:28.660 "serial_number": "z]FE3hXmD;{4A-/:Ga:25", 00:13:28.660 "method": "nvmf_create_subsystem", 00:13:28.660 "req_id": 1 00:13:28.660 } 00:13:28.660 Got JSON-RPC error response 00:13:28.660 response: 00:13:28.660 { 00:13:28.660 "code": -32602, 00:13:28.660 "message": "Invalid SN z]FE3hXmD;{4A-/:Ga:25" 00:13:28.660 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:28.660 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:28.660 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:28.660 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' 
'77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:28.660 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:28.660 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:28.660 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:28.660 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.660 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:28.660 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:28.660 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:28.660 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.660 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.660 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:28.660 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:28.660 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:28.660 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.660 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.660 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:13:28.660 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:28.660 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:13:28.660 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.660 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.660 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:28.660 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:28.660 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:28.660 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.660 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.660 06:56:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:13:28.660 06:56:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:28.660 06:56:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:13:28.660 06:56:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.660 06:56:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.660 06:56:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:13:28.660 06:56:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:28.660 06:56:28 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:13:28.660 06:56:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.660 06:56:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.660 06:56:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:13:28.660 06:56:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:28.660 06:56:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:13:28.660 06:56:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.660 06:56:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.660 06:56:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:13:28.660 06:56:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:28.660 06:56:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:13:28.660 06:56:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.660 06:56:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.660 06:56:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:13:28.660 06:56:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:13:28.660 06:56:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:13:28.660 06:56:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.660 06:56:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.660 06:56:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:13:28.660 06:56:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:28.660 06:56:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:13:28.660 06:56:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.660 06:56:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.660 06:56:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:13:28.660 06:56:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:28.660 06:56:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:13:28.660 06:56:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.660 06:56:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.660 06:56:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:13:28.660 06:56:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:28.660 06:56:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:13:28.660 06:56:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.660 06:56:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.660 06:56:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:13:28.660 06:56:28 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:13:28.660 06:56:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:13:28.660 06:56:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.660 06:56:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
[log condensed: the next 28 iterations of the target/invalid.sh@24-25 printf %x / echo -e / string+= loop are identical apart from timestamps and the character appended (G \ a \ 8 X @ F f x 5 ~ 0 } a 7 y # C a " 0 j I ^ v m ?) and are elided here]
00:13:28.924 06:56:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ C == \- ]] 00:13:28.924 06:56:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'C$'\''C;Q%5SJ|NrG\a\8X@Ffx5~0}a7y#Ca"0jI^vm?' 00:13:28.924 06:56:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'C$'\''C;Q%5SJ|NrG\a\8X@Ffx5~0}a7y#Ca"0jI^vm?' nqn.2016-06.io.spdk:cnode22899 00:13:29.186 [2024-10-16 06:56:28.460290] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22899: invalid model number 'C$'C;Q%5SJ|NrG\a\8X@Ffx5~0}a7y#Ca"0jI^vm?' 00:13:29.186 06:56:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:29.186 { 00:13:29.186 "nqn": "nqn.2016-06.io.spdk:cnode22899", 00:13:29.186 "model_number": "C$'\''C;Q%5SJ|NrG\\a\\8X@Ffx5~0}a7y#Ca\"0jI^vm?", 00:13:29.186 "method": "nvmf_create_subsystem", 00:13:29.186 "req_id": 1 00:13:29.186 } 00:13:29.186 Got JSON-RPC error response 00:13:29.186 response: 00:13:29.186 { 00:13:29.186 "code": -32602, 00:13:29.186 "message": "Invalid MN C$'\''C;Q%5SJ|NrG\\a\\8X@Ffx5~0}a7y#Ca\"0jI^vm?" 00:13:29.186 }' 00:13:29.186 06:56:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:29.186 { 00:13:29.186 "nqn": "nqn.2016-06.io.spdk:cnode22899", 00:13:29.186 "model_number": "C$'C;Q%5SJ|NrG\\a\\8X@Ffx5~0}a7y#Ca\"0jI^vm?", 00:13:29.186 "method": "nvmf_create_subsystem", 00:13:29.186 "req_id": 1 00:13:29.186 } 00:13:29.186 Got JSON-RPC error response 00:13:29.186 response: 00:13:29.186 { 00:13:29.186 "code": -32602, 00:13:29.186 "message": "Invalid MN C$'C;Q%5SJ|NrG\\a\\8X@Ffx5~0}a7y#Ca\"0jI^vm?" 00:13:29.186 } == *\I\n\v\a\l\i\d\ \M\N* ]]
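For readers tracing the loop above: target/invalid.sh@24-25 builds the random model number one byte at a time. A minimal standalone sketch of the same technique in bash (the length and the RANDOM-based character source are illustrative assumptions; only the printf %x / echo -e / string+= mechanics come from the trace):

    string=''
    length=41                              # assumed bound; the script derives its own
    for (( ll = 0; ll < length; ll++ )); do
        charcode=$(( (RANDOM % 94) + 33 )) # assumed source: printable ASCII 33..126
        hex=$(printf %x "$charcode")       # e.g. 72 for 'r'
        string+=$(echo -e "\x$hex")        # decode the hex escape, append one character
    done
    echo "$string"                         # then handed to rpc.py nvmf_create_subsystem -d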
00:13:29.186 06:56:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:29.186 [2024-10-16 06:56:28.665237] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:29.447 06:56:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:29.447 06:56:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:29.447 06:56:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:29.447 06:56:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:29.447 06:56:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:29.447 06:56:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:29.709 [2024-10-16 06:56:29.074882] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:29.709 06:56:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:29.709 { 00:13:29.709 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:29.709 "listen_address": { 00:13:29.709 "trtype": "tcp", 00:13:29.709 "traddr": "", 00:13:29.709 "trsvcid": "4421" 00:13:29.709 }, 00:13:29.709 "method": "nvmf_subsystem_remove_listener", 00:13:29.709 "req_id": 1 00:13:29.709 } 00:13:29.709 Got JSON-RPC error response 00:13:29.709 response: 00:13:29.709 { 00:13:29.709 "code": -32602, 00:13:29.709 "message": "Invalid parameters" 00:13:29.709 }' 00:13:29.709 06:56:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:29.709 { 00:13:29.709 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:29.709 "listen_address": { 00:13:29.709 "trtype": "tcp", 00:13:29.709 "traddr": "", 00:13:29.709 "trsvcid": "4421" 00:13:29.709 }, 00:13:29.709 "method": "nvmf_subsystem_remove_listener", 00:13:29.709 "req_id": 1 00:13:29.709 } 00:13:29.709 Got JSON-RPC error response 00:13:29.709 response: 00:13:29.709 { 00:13:29.709 "code": -32602, 00:13:29.709 "message": "Invalid parameters" 00:13:29.709 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]]
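Every negative case in this test uses the same capture-and-match idiom visible in the out=/[[ ... ]] pairs above: run the RPC, capture the JSON-RPC error body, and glob-match the expected message. A condensed sketch (path shortened; the pattern text is taken from the checks in this log):

    out=$(scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31348 -i 0 2>&1 || true)
    [[ $out == *"Invalid cntlid range"* ]] || { echo "unexpected response: $out" >&2; exit 1; }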
00:13:29.970 "min_cntlid": 0, 00:13:29.970 "method": "nvmf_create_subsystem", 00:13:29.970 "req_id": 1 00:13:29.970 } 00:13:29.970 Got JSON-RPC error response 00:13:29.970 response: 00:13:29.970 { 00:13:29.970 "code": -32602, 00:13:29.970 "message": "Invalid cntlid range [0-65519]" 00:13:29.970 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:29.970 06:56:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2355 -i 65520 00:13:30.231 [2024-10-16 06:56:29.484334] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2355: invalid cntlid range [65520-65519] 00:13:30.231 06:56:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:30.231 { 00:13:30.231 "nqn": "nqn.2016-06.io.spdk:cnode2355", 00:13:30.231 "min_cntlid": 65520, 00:13:30.231 "method": "nvmf_create_subsystem", 00:13:30.231 "req_id": 1 00:13:30.231 } 00:13:30.231 Got JSON-RPC error response 00:13:30.231 response: 00:13:30.231 { 00:13:30.231 "code": -32602, 00:13:30.231 "message": "Invalid cntlid range [65520-65519]" 00:13:30.231 }' 00:13:30.231 06:56:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:30.231 { 00:13:30.231 "nqn": "nqn.2016-06.io.spdk:cnode2355", 00:13:30.231 "min_cntlid": 65520, 00:13:30.231 "method": "nvmf_create_subsystem", 00:13:30.231 "req_id": 1 00:13:30.231 } 00:13:30.231 Got JSON-RPC error response 00:13:30.231 response: 00:13:30.231 { 00:13:30.231 "code": -32602, 00:13:30.231 "message": "Invalid cntlid range [65520-65519]" 00:13:30.231 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:30.231 06:56:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19319 -I 0 00:13:30.231 [2024-10-16 06:56:29.713090] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19319: invalid cntlid range [1-0] 00:13:30.491 06:56:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:30.491 { 00:13:30.491 "nqn": "nqn.2016-06.io.spdk:cnode19319", 00:13:30.491 "max_cntlid": 0, 00:13:30.491 "method": "nvmf_create_subsystem", 00:13:30.491 "req_id": 1 00:13:30.491 } 00:13:30.491 Got JSON-RPC error response 00:13:30.491 response: 00:13:30.491 { 00:13:30.491 "code": -32602, 00:13:30.491 "message": "Invalid cntlid range [1-0]" 00:13:30.491 }' 00:13:30.491 06:56:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:30.491 { 00:13:30.491 "nqn": "nqn.2016-06.io.spdk:cnode19319", 00:13:30.491 "max_cntlid": 0, 00:13:30.491 "method": "nvmf_create_subsystem", 00:13:30.491 "req_id": 1 00:13:30.491 } 00:13:30.491 Got JSON-RPC error response 00:13:30.491 response: 00:13:30.491 { 00:13:30.491 "code": -32602, 00:13:30.491 "message": "Invalid cntlid range [1-0]" 00:13:30.491 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:30.491 06:56:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20117 -I 65520 00:13:30.491 [2024-10-16 06:56:29.901705] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20117: invalid cntlid range [1-65520] 00:13:30.491 06:56:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@79 -- # out='request: 00:13:30.491 { 00:13:30.491 "nqn": "nqn.2016-06.io.spdk:cnode20117", 00:13:30.491 "max_cntlid": 65520, 00:13:30.491 "method": "nvmf_create_subsystem", 00:13:30.492 "req_id": 1 00:13:30.492 } 00:13:30.492 Got JSON-RPC error response 00:13:30.492 response: 00:13:30.492 { 00:13:30.492 "code": -32602, 00:13:30.492 "message": "Invalid cntlid range [1-65520]" 00:13:30.492 }' 00:13:30.492 06:56:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:30.492 { 00:13:30.492 "nqn": "nqn.2016-06.io.spdk:cnode20117", 00:13:30.492 "max_cntlid": 65520, 00:13:30.492 "method": "nvmf_create_subsystem", 00:13:30.492 "req_id": 1 00:13:30.492 } 00:13:30.492 Got JSON-RPC error response 00:13:30.492 response: 00:13:30.492 { 00:13:30.492 "code": -32602, 00:13:30.492 "message": "Invalid cntlid range [1-65520]" 00:13:30.492 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:30.492 06:56:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21775 -i 6 -I 5 00:13:30.752 [2024-10-16 06:56:30.082698] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21775: invalid cntlid range [6-5] 00:13:30.752 06:56:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:30.752 { 00:13:30.752 "nqn": "nqn.2016-06.io.spdk:cnode21775", 00:13:30.752 "min_cntlid": 6, 00:13:30.752 "max_cntlid": 5, 00:13:30.752 "method": "nvmf_create_subsystem", 00:13:30.752 "req_id": 1 00:13:30.752 } 00:13:30.752 Got JSON-RPC error response 00:13:30.752 response: 00:13:30.752 { 00:13:30.752 "code": -32602, 00:13:30.752 "message": "Invalid cntlid range [6-5]" 00:13:30.752 }' 00:13:30.752 06:56:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:30.752 { 00:13:30.752 "nqn": "nqn.2016-06.io.spdk:cnode21775", 00:13:30.752 "min_cntlid": 6, 00:13:30.752 "max_cntlid": 5, 00:13:30.752 "method": "nvmf_create_subsystem", 00:13:30.752 "req_id": 1 00:13:30.752 } 00:13:30.752 Got JSON-RPC error response 00:13:30.752 response: 00:13:30.752 { 00:13:30.752 "code": -32602, 00:13:30.752 "message": "Invalid cntlid range [6-5]" 00:13:30.752 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
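Taken together, the five rejections above pin the accepted controller-ID window to 1 <= min_cntlid <= max_cntlid <= 65519. A sketch of calls on both sides of that boundary (the cnode names are illustrative; the -i/-I flags and error strings are the ones logged above):

    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -i 1 -I 65519  # accepted: full legal range
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -i 0           # rejected: Invalid cntlid range [0-65519]
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -I 65520       # rejected: Invalid cntlid range [1-65520]
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -i 6 -I 5      # rejected: Invalid cntlid range [6-5]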
00:13:30.752 06:56:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:30.752 06:56:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:30.752 { 00:13:30.752 "name": "foobar", 00:13:30.752 "method": "nvmf_delete_target", 00:13:30.752 "req_id": 1 00:13:30.752 } 00:13:30.752 Got JSON-RPC error response 00:13:30.752 response: 00:13:30.752 { 00:13:30.752 "code": -32602, 00:13:30.752 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:30.752 }' 00:13:30.752 06:56:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:30.752 { 00:13:30.752 "name": "foobar", 00:13:30.752 "method": "nvmf_delete_target", 00:13:30.752 "req_id": 1 00:13:30.752 } 00:13:30.752 Got JSON-RPC error response 00:13:30.752 response: 00:13:30.752 { 00:13:30.752 "code": -32602, 00:13:30.752 "message": "The specified target doesn't exist, cannot delete it." 00:13:30.752 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:30.752 06:56:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:30.752 06:56:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:30.752 06:56:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:30.752 06:56:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:13:30.752 06:56:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:30.752 06:56:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:13:30.752 06:56:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:30.753 06:56:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:30.753 rmmod nvme_tcp 00:13:30.753 rmmod nvme_fabrics 00:13:31.013 rmmod nvme_keyring 00:13:31.013 06:56:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:31.013 06:56:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:13:31.013 06:56:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:13:31.013 06:56:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@515 -- # '[' -n 3052141 ']' 00:13:31.013 06:56:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # killprocess 3052141 00:13:31.013 06:56:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 3052141 ']' 00:13:31.013 06:56:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 3052141 00:13:31.013 06:56:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:13:31.013 06:56:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:31.013 06:56:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3052141 00:13:31.013 06:56:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:31.013 06:56:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:31.013 06:56:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3052141' 00:13:31.013 killing process with pid 3052141 00:13:31.013 06:56:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 3052141 00:13:31.013 06:56:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 3052141 00:13:31.013 06:56:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:31.013 06:56:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:31.013 06:56:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:31.013 06:56:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:13:31.014 06:56:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # iptables-restore 00:13:31.014 06:56:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # iptables-save 00:13:31.014 06:56:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
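The iptr teardown traced at nvmf/common.sh@789 above removes only the firewall rules the test tagged, by round-tripping the ruleset through a filter; the three traced commands presumably form this pipeline:

    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop tagged rules, keep everything else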
00:13:31.014 06:56:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:31.014 06:56:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:31.014 06:56:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:31.014 06:56:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:31.014 06:56:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:33.558 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:33.558 00:13:33.558 real 0m14.277s 00:13:33.558 user 0m21.583s 00:13:33.558 sys 0m6.743s 00:13:33.558 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:33.558 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:33.558 ************************************ 00:13:33.558 END TEST nvmf_invalid 00:13:33.558 ************************************ 00:13:33.558 06:56:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:33.558 06:56:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:33.558 06:56:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:33.558 06:56:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:33.558 ************************************ 00:13:33.558 START TEST nvmf_connect_stress 00:13:33.558 ************************************ 00:13:33.558 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:33.558 * Looking for test storage...
00:13:33.558 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:33.558 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:33.558 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:13:33.558 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:33.558 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:33.558 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:33.558 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:33.558 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:33.558 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:13:33.558 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:13:33.558 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:13:33.558 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:13:33.558 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:13:33.558 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:13:33.558 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:13:33.558 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:33.558 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:13:33.558 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:13:33.558 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:33.558 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:33.558 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:13:33.558 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:13:33.558 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:33.558 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:13:33.558 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:13:33.558 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:13:33.558 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:13:33.558 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:33.558 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:13:33.558 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:13:33.558 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:33.558 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:33.558 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:13:33.558 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:33.558 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:33.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.558 --rc genhtml_branch_coverage=1 00:13:33.558 --rc genhtml_function_coverage=1 00:13:33.558 --rc genhtml_legend=1 00:13:33.558 --rc geninfo_all_blocks=1 00:13:33.558 --rc geninfo_unexecuted_blocks=1 00:13:33.558 00:13:33.558 ' 00:13:33.558 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:33.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.558 --rc genhtml_branch_coverage=1 00:13:33.558 --rc genhtml_function_coverage=1 00:13:33.558 --rc genhtml_legend=1 00:13:33.558 --rc geninfo_all_blocks=1 00:13:33.558 --rc geninfo_unexecuted_blocks=1 00:13:33.558 00:13:33.558 ' 00:13:33.558 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:33.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.558 --rc genhtml_branch_coverage=1 00:13:33.558 --rc genhtml_function_coverage=1 00:13:33.558 --rc genhtml_legend=1 00:13:33.558 --rc geninfo_all_blocks=1 00:13:33.558 --rc geninfo_unexecuted_blocks=1 00:13:33.558 00:13:33.558 ' 00:13:33.558 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:33.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.558 --rc genhtml_branch_coverage=1 00:13:33.558 --rc genhtml_function_coverage=1 00:13:33.558 --rc genhtml_legend=1 00:13:33.558 --rc geninfo_all_blocks=1 00:13:33.558 --rc geninfo_unexecuted_blocks=1 00:13:33.558 00:13:33.558 ' 00:13:33.558 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:33.558 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:33.558 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:33.558 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:33.559 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:33.559 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:33.559 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:33.559 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:33.559 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:33.559 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:33.559 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:33.559 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:33.559 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:33.559 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:33.559 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:33.559 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:33.559 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:33.559 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:33.559 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:33.559 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:13:33.559 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:33.559 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:33.559 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:33.559 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.559 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.559 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.559 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:33.559 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.559 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:13:33.559 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:33.559 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:33.559 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:33.559 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:33.559 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:33.559 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:33.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:33.559 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:33.559 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:33.559 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:33.559 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:33.559 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:33.559 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:33.559 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:33.559 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:33.559 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:33.559 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:33.559 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:33.559 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:33.559 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:33.559 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:33.559 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:13:33.559 06:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:13:41.701 06:56:39 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:41.701 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:41.701 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:41.701 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:41.701 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:41.701 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:41.702 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:41.702 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:41.702 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:41.702 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:41.702 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:41.702 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:41.702 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:41.702 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:41.702 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:41.702 06:56:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:41.702 06:56:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:41.702 06:56:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:41.702 06:56:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:41.702 06:56:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:41.702 06:56:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:41.702 06:56:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:41.702 06:56:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:41.702 06:56:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:41.702 06:56:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:41.702 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:41.702 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.657 ms 00:13:41.702 00:13:41.702 --- 10.0.0.2 ping statistics --- 00:13:41.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:41.702 rtt min/avg/max/mdev = 0.657/0.657/0.657/0.000 ms 00:13:41.702 06:56:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:41.702 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:41.702 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:13:41.702 00:13:41.702 --- 10.0.0.1 ping statistics --- 00:13:41.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:41.702 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms
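The addresses above come from a two-namespace topology: the target NIC (cvl_0_0, 10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace while the initiator NIC (cvl_0_1, 10.0.0.1) stays in the root namespace. A condensed replay of just the topology commands traced at nvmf/common.sh@271-291 above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                             # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target ns -> root ns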
00:13:41.702 06:56:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:41.702 06:56:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # return 0 00:13:41.702 06:56:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:41.702 06:56:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:41.702 06:56:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:41.702 06:56:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:41.702 06:56:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:41.702 06:56:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:41.702 06:56:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:41.702 06:56:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:41.702 06:56:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:41.702 06:56:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:41.702 06:56:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.702 06:56:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # nvmfpid=3057328 00:13:41.702 06:56:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # waitforlisten 3057328 00:13:41.702 06:56:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:41.702 06:56:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 3057328 ']' 00:13:41.702 06:56:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:41.702 06:56:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:41.702 06:56:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:41.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:41.702 06:56:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:41.702 06:56:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.702 [2024-10-16 06:56:40.420707] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:13:41.702 [2024-10-16 06:56:40.420774] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:41.702 [2024-10-16 06:56:40.510536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:41.702 [2024-10-16 06:56:40.562001] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:41.702 [2024-10-16 06:56:40.562055] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:41.702 [2024-10-16 06:56:40.562064] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:41.702 [2024-10-16 06:56:40.562072] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:41.702 [2024-10-16 06:56:40.562078] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:41.702 [2024-10-16 06:56:40.564150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:41.702 [2024-10-16 06:56:40.564312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:41.702 [2024-10-16 06:56:40.564312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:41.964 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:41.964 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:13:41.964 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:41.964 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:41.964 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.964 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:41.964 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:41.964 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.964 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.964 [2024-10-16 06:56:41.294749] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:41.964 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.964 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:41.964 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:41.964 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.964 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.964 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:41.964 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.964 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.964 [2024-10-16 06:56:41.320526] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:41.964 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.964 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:41.964 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.964 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.964 NULL1 00:13:41.964 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.964 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3057420 00:13:41.964 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:41.964 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:41.964 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:41.964 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:41.964 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.964 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.964 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.964 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.964 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.964 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.964 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.964 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.964 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.964 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.964 06:56:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.964 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.964 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.964 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.964 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.964 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.964 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.964 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.965 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.965 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.965 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.965 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.965 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.965 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.965 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.965 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.965 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.965 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.965 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.965 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.965 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.965 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.965 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.965 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.965 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.965 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.965 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.965 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.965 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.965 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.965 06:56:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3057420 00:13:41.965 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:41.965 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.965 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.538 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.538 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3057420 00:13:42.538 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:42.538 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.538 06:56:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.798 06:56:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.799 06:56:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3057420 00:13:42.799 06:56:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:42.799 06:56:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.799 06:56:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.060 06:56:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.060 06:56:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3057420 00:13:43.060 06:56:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:43.060 06:56:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.060 06:56:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.320 06:56:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.320 06:56:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3057420 00:13:43.320 06:56:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:43.320 06:56:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.320 06:56:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.582 06:56:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.582 06:56:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3057420 00:13:43.582 06:56:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:43.582 06:56:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.582 06:56:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.154 06:56:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.154 06:56:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3057420 00:13:44.154 06:56:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.154 06:56:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.154 06:56:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.414 06:56:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.414 06:56:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3057420 00:13:44.414 06:56:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.414 06:56:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.414 06:56:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.674 06:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.674 06:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3057420 00:13:44.674 06:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.674 06:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.674 06:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.935 06:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.935 06:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3057420 00:13:44.935 06:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.935 06:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.935 06:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.507 06:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.507 06:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3057420 00:13:45.507 06:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.507 06:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.507 06:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.768 06:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.768 06:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3057420 00:13:45.768 06:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.768 06:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.768 06:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.029 06:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.029 06:56:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3057420 00:13:46.029 06:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.029 06:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.029 06:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.290 06:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.290 06:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3057420 00:13:46.290 06:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.290 06:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.291 06:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.551 06:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.551 06:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3057420 00:13:46.551 06:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.551 06:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.551 06:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.123 06:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.123 06:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3057420 00:13:47.123 06:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.123 06:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.123 06:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.383 06:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.383 06:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3057420 00:13:47.383 06:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.383 06:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.383 06:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.645 06:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.645 06:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3057420 00:13:47.645 06:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.645 06:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.645 06:56:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.905 06:56:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.905 06:56:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3057420 00:13:47.905 06:56:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.905 06:56:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.905 06:56:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.166 06:56:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.166 06:56:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3057420 00:13:48.166 06:56:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.166 06:56:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.166 06:56:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.737 06:56:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.737 06:56:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3057420 00:13:48.737 06:56:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.737 06:56:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.737 06:56:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.999 06:56:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.999 06:56:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3057420 00:13:48.999 06:56:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.999 06:56:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.999 06:56:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.259 06:56:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.259 06:56:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3057420 00:13:49.259 06:56:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.259 06:56:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.259 06:56:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.520 06:56:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.521 06:56:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3057420 00:13:49.521 06:56:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.521 06:56:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.521 06:56:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.781 06:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.782 06:56:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3057420 00:13:49.782 06:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.782 06:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.782 06:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.352 06:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.352 06:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3057420 00:13:50.352 06:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.352 06:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.352 06:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.613 06:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.613 06:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3057420 00:13:50.613 06:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.613 06:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.613 06:56:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.874 06:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.874 06:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3057420 00:13:50.874 06:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.874 06:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.874 06:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.135 06:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.135 06:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3057420 00:13:51.135 06:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.135 06:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.135 06:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.396 06:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.396 06:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3057420 00:13:51.396 06:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.396 06:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.396 06:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.968 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.968 06:56:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3057420 00:13:51.968 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.968 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.968 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.968 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:52.229 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.229 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3057420 00:13:52.229 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3057420) - No such process 00:13:52.229 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3057420 00:13:52.229 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:52.229 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:52.229 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:52.229 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:52.229 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:13:52.229 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:52.229 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:13:52.229 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:52.229 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:52.229 rmmod nvme_tcp 00:13:52.229 rmmod nvme_fabrics 00:13:52.229 rmmod nvme_keyring 00:13:52.229 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:52.229 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:13:52.229 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:13:52.229 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@515 -- # '[' -n 3057328 ']' 00:13:52.229 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # killprocess 3057328 00:13:52.229 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 3057328 ']' 00:13:52.229 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 3057328 00:13:52.229 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:13:52.229 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:52.229 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3057328 00:13:52.229 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 
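[Editor's note] The long run of near-identical 'kill -0 3057420' / rpc_cmd pairs above is the watch loop at connect_stress.sh lines 34-35. A paraphrased sketch (variable names from the trace; the loop body is reconstructed, not quoted):
while kill -0 $PERF_PID; do     # line 34: is connect_stress (-t 10) still running?
    rpc_cmd < $rpcs             # line 35: replay the RPC batch built into rpc.txt by the seq 1 20 / cat loop
done
# Once the 10-second run ends, kill -0 reports "No such process" (seen above),
# and the harness waits on the PID, removes rpc.txt, and begins teardown.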
00:13:52.229 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:52.229 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3057328' 00:13:52.229 killing process with pid 3057328 00:13:52.229 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 3057328 00:13:52.229 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 3057328 00:13:52.490 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:52.490 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:52.491 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:52.491 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:13:52.491 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:52.491 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-save 00:13:52.491 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-restore 00:13:52.491 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:52.491 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:52.491 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:52.491 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:52.491 06:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:54.456 06:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:54.456 00:13:54.456 real 0m21.244s 00:13:54.456 user 0m42.229s 00:13:54.456 sys 0m9.268s 00:13:54.456 06:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:54.456 06:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.456 ************************************ 00:13:54.456 END TEST nvmf_connect_stress 00:13:54.456 ************************************ 00:13:54.456 06:56:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:54.456 06:56:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:54.456 06:56:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:54.456 06:56:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:54.756 ************************************ 00:13:54.756 START TEST nvmf_fused_ordering 00:13:54.756 ************************************ 00:13:54.756 06:56:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:54.756 * Looking for test storage... 
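[Editor's note] The teardown traced above is what keeps back-to-back runs clean. Roughly, in trace order (a sketch; the exact form of the _remove_spdk_ns helper is assumed):
modprobe -v -r nvme-tcp                                # nvmfcleanup: unload nvme-tcp, nvme-fabrics, nvme-keyring
kill 3057328 && wait 3057328                           # killprocess: stop nvmf_tgt (reactor_1)
iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: drop only the rule tagged SPDK_NVMF
ip netns del cvl_0_0_ns_spdk                           # _remove_spdk_ns (assumed)
ip -4 addr flush cvl_0_1                               # clear the initiator-side address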
00:13:54.756 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:54.756 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:54.756 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lcov --version 00:13:54.756 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:54.756 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:54.756 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:54.756 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:54.756 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:54.756 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:13:54.756 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:13:54.756 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:13:54.756 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:13:54.756 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:13:54.756 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:13:54.756 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:13:54.756 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:54.756 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:13:54.756 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:13:54.756 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:54.756 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:54.756 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:13:54.756 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:13:54.756 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:54.756 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:13:54.756 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:13:54.756 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:13:54.756 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:13:54.756 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:54.756 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:13:54.756 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:13:54.756 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:54.756 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:54.756 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:13:54.756 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:54.756 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:54.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.756 --rc genhtml_branch_coverage=1 00:13:54.756 --rc genhtml_function_coverage=1 00:13:54.756 --rc genhtml_legend=1 00:13:54.756 --rc geninfo_all_blocks=1 00:13:54.756 --rc geninfo_unexecuted_blocks=1 00:13:54.756 00:13:54.756 ' 00:13:54.756 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:54.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.756 --rc genhtml_branch_coverage=1 00:13:54.756 --rc genhtml_function_coverage=1 00:13:54.756 --rc genhtml_legend=1 00:13:54.756 --rc geninfo_all_blocks=1 00:13:54.756 --rc geninfo_unexecuted_blocks=1 00:13:54.756 00:13:54.756 ' 00:13:54.756 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:54.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.756 --rc genhtml_branch_coverage=1 00:13:54.756 --rc genhtml_function_coverage=1 00:13:54.756 --rc genhtml_legend=1 00:13:54.756 --rc geninfo_all_blocks=1 00:13:54.756 --rc geninfo_unexecuted_blocks=1 00:13:54.756 00:13:54.756 ' 00:13:54.756 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:54.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.756 --rc genhtml_branch_coverage=1 00:13:54.756 --rc genhtml_function_coverage=1 00:13:54.756 --rc genhtml_legend=1 00:13:54.756 --rc geninfo_all_blocks=1 00:13:54.756 --rc geninfo_unexecuted_blocks=1 00:13:54.756 00:13:54.756 ' 00:13:54.756 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:54.756 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:54.756 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:54.756 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:54.756 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:54.756 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:54.756 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:54.756 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:54.756 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:54.756 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:54.756 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:54.756 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:54.756 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:54.756 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:54.756 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:54.756 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:54.756 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:54.756 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:54.756 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:54.757 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:13:54.757 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:54.757 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:54.757 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:54.757 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.757 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.757 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.757 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:54.757 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.757 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:13:54.757 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:54.757 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:54.757 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:54.757 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:54.757 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:54.757 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:54.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:54.757 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:54.757 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:54.757 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:54.757 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:54.757 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:54.757 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:54.757 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:54.757 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:54.757 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:54.757 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:54.757 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:54.757 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:54.757 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:54.757 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:54.757 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:13:54.757 06:56:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:03.028 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:03.028 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:14:03.028 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:03.028 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:03.028 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:03.028 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:03.028 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:03.028 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:14:03.028 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:03.028 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:14:03.028 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:14:03.028 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:14:03.028 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:14:03.028 06:57:01 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:14:03.028 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:14:03.028 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:03.028 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:03.028 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:03.028 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:03.028 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:03.028 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:03.028 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:03.028 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:03.028 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:03.028 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:03.028 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:03.028 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:03.028 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:03.028 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:03.028 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:03.029 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:03.029 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:03.029 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:03.029 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # is_hw=yes 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:03.029 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:03.029 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:14:03.029 00:14:03.029 --- 10.0.0.2 ping statistics --- 00:14:03.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:03.029 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:03.029 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:03.029 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:14:03.029 00:14:03.029 --- 10.0.0.1 ping statistics --- 00:14:03.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:03.029 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # return 0 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # nvmfpid=3063810 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # waitforlisten 3063810 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 3063810 ']' 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:14:03.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:03.029 06:57:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:03.029 [2024-10-16 06:57:01.680502] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:14:03.029 [2024-10-16 06:57:01.680568] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:03.030 [2024-10-16 06:57:01.771195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.030 [2024-10-16 06:57:01.822257] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:03.030 [2024-10-16 06:57:01.822312] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:03.030 [2024-10-16 06:57:01.822322] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:03.030 [2024-10-16 06:57:01.822329] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:03.030 [2024-10-16 06:57:01.822335] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:03.030 [2024-10-16 06:57:01.823117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:03.030 06:57:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:03.030 06:57:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:14:03.030 06:57:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:03.030 06:57:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:03.030 06:57:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:03.291 06:57:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:03.291 06:57:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:03.291 06:57:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.291 06:57:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:03.291 [2024-10-16 06:57:02.565279] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:03.291 06:57:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.291 06:57:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:03.291 06:57:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.291 06:57:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:03.291 06:57:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:14:03.291 06:57:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:03.291 06:57:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.291 06:57:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:03.291 [2024-10-16 06:57:02.589618] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:03.291 06:57:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.291 06:57:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:03.291 06:57:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.291 06:57:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:03.291 NULL1 00:14:03.291 06:57:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.291 06:57:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:03.291 06:57:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.291 06:57:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:03.291 06:57:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.291 06:57:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:03.291 06:57:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.291 06:57:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:03.291 06:57:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.291 06:57:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:03.291 [2024-10-16 06:57:02.661029] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
00:14:03.291 [2024-10-16 06:57:02.661098] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3063975 ] 00:14:03.863 Attached to nqn.2016-06.io.spdk:cnode1 00:14:03.863 Namespace ID: 1 size: 1GB 00:14:03.863 fused_ordering(0) 00:14:03.863 fused_ordering(1) 00:14:03.863 fused_ordering(2) 00:14:03.863 fused_ordering(3) 00:14:03.863 fused_ordering(4) 00:14:03.863 fused_ordering(5) 00:14:03.863 fused_ordering(6) 00:14:03.863 fused_ordering(7) 00:14:03.863 fused_ordering(8) 00:14:03.863 fused_ordering(9) 00:14:03.863 fused_ordering(10) 00:14:03.863 fused_ordering(11) 00:14:03.863 fused_ordering(12) 00:14:03.863 fused_ordering(13) 00:14:03.863 fused_ordering(14) 00:14:03.863 fused_ordering(15) 00:14:03.863 fused_ordering(16) 00:14:03.863 fused_ordering(17) 00:14:03.863 fused_ordering(18) 00:14:03.863 fused_ordering(19) 00:14:03.863 fused_ordering(20) 00:14:03.863 fused_ordering(21) 00:14:03.863 fused_ordering(22) 00:14:03.863 fused_ordering(23) 00:14:03.863 fused_ordering(24) 00:14:03.863 fused_ordering(25) 00:14:03.863 fused_ordering(26) 00:14:03.863 fused_ordering(27) 00:14:03.863 fused_ordering(28) 00:14:03.863 fused_ordering(29) 00:14:03.863 fused_ordering(30) 00:14:03.863 fused_ordering(31) 00:14:03.863 fused_ordering(32) 00:14:03.863 fused_ordering(33) 00:14:03.863 fused_ordering(34) 00:14:03.863 fused_ordering(35) 00:14:03.863 fused_ordering(36) 00:14:03.863 fused_ordering(37) 00:14:03.863 fused_ordering(38) 00:14:03.863 fused_ordering(39) 00:14:03.863 fused_ordering(40) 00:14:03.863 fused_ordering(41) 00:14:03.863 fused_ordering(42) 00:14:03.863 fused_ordering(43) 00:14:03.863 fused_ordering(44) 00:14:03.863 fused_ordering(45) 00:14:03.863 fused_ordering(46) 00:14:03.863 fused_ordering(47) 00:14:03.863 fused_ordering(48) 00:14:03.863 fused_ordering(49) 00:14:03.863 fused_ordering(50) 00:14:03.863 fused_ordering(51) 00:14:03.863 fused_ordering(52) 00:14:03.863 fused_ordering(53) 00:14:03.863 fused_ordering(54) 00:14:03.863 fused_ordering(55) 00:14:03.863 fused_ordering(56) 00:14:03.863 fused_ordering(57) 00:14:03.863 fused_ordering(58) 00:14:03.863 fused_ordering(59) 00:14:03.863 fused_ordering(60) 00:14:03.863 fused_ordering(61) 00:14:03.863 fused_ordering(62) 00:14:03.863 fused_ordering(63) 00:14:03.863 fused_ordering(64) 00:14:03.863 fused_ordering(65) 00:14:03.863 fused_ordering(66) 00:14:03.863 fused_ordering(67) 00:14:03.863 fused_ordering(68) 00:14:03.863 fused_ordering(69) 00:14:03.863 fused_ordering(70) 00:14:03.863 fused_ordering(71) 00:14:03.863 fused_ordering(72) 00:14:03.863 fused_ordering(73) 00:14:03.863 fused_ordering(74) 00:14:03.863 fused_ordering(75) 00:14:03.863 fused_ordering(76) 00:14:03.863 fused_ordering(77) 00:14:03.863 fused_ordering(78) 00:14:03.863 fused_ordering(79) 00:14:03.863 fused_ordering(80) 00:14:03.863 fused_ordering(81) 00:14:03.863 fused_ordering(82) 00:14:03.863 fused_ordering(83) 00:14:03.863 fused_ordering(84) 00:14:03.863 fused_ordering(85) 00:14:03.863 fused_ordering(86) 00:14:03.863 fused_ordering(87) 00:14:03.863 fused_ordering(88) 00:14:03.863 fused_ordering(89) 00:14:03.863 fused_ordering(90) 00:14:03.863 fused_ordering(91) 00:14:03.863 fused_ordering(92) 00:14:03.863 fused_ordering(93) 00:14:03.863 fused_ordering(94) 00:14:03.863 fused_ordering(95) 00:14:03.863 fused_ordering(96) 00:14:03.863 fused_ordering(97) 00:14:03.863 fused_ordering(98) 
00:14:03.863 fused_ordering(99) 00:14:03.863 fused_ordering(100) 00:14:03.863 fused_ordering(101) 00:14:03.863 fused_ordering(102) 00:14:03.863 fused_ordering(103) 00:14:03.863 fused_ordering(104) 00:14:03.863 fused_ordering(105) 00:14:03.863 fused_ordering(106) 00:14:03.863 fused_ordering(107) 00:14:03.863 fused_ordering(108) 00:14:03.863 fused_ordering(109) 00:14:03.863 fused_ordering(110) 00:14:03.863 fused_ordering(111) 00:14:03.863 fused_ordering(112) 00:14:03.863 fused_ordering(113) 00:14:03.863 fused_ordering(114) 00:14:03.863 fused_ordering(115) 00:14:03.863 fused_ordering(116) 00:14:03.863 fused_ordering(117) 00:14:03.863 fused_ordering(118) 00:14:03.863 fused_ordering(119) 00:14:03.863 fused_ordering(120) 00:14:03.863 fused_ordering(121) 00:14:03.863 fused_ordering(122) 00:14:03.863 fused_ordering(123) 00:14:03.863 fused_ordering(124) 00:14:03.863 fused_ordering(125) 00:14:03.863 fused_ordering(126) 00:14:03.863 fused_ordering(127) 00:14:03.863 fused_ordering(128) 00:14:03.863 fused_ordering(129) 00:14:03.863 fused_ordering(130) 00:14:03.863 fused_ordering(131) 00:14:03.863 fused_ordering(132) 00:14:03.863 fused_ordering(133) 00:14:03.863 fused_ordering(134) 00:14:03.863 fused_ordering(135) 00:14:03.863 fused_ordering(136) 00:14:03.863 fused_ordering(137) 00:14:03.863 fused_ordering(138) 00:14:03.863 fused_ordering(139) 00:14:03.863 fused_ordering(140) 00:14:03.863 fused_ordering(141) 00:14:03.863 fused_ordering(142) 00:14:03.863 fused_ordering(143) 00:14:03.863 fused_ordering(144) 00:14:03.863 fused_ordering(145) 00:14:03.863 fused_ordering(146) 00:14:03.863 fused_ordering(147) 00:14:03.863 fused_ordering(148) 00:14:03.863 fused_ordering(149) 00:14:03.863 fused_ordering(150) 00:14:03.863 fused_ordering(151) 00:14:03.863 fused_ordering(152) 00:14:03.863 fused_ordering(153) 00:14:03.863 fused_ordering(154) 00:14:03.863 fused_ordering(155) 00:14:03.863 fused_ordering(156) 00:14:03.863 fused_ordering(157) 00:14:03.863 fused_ordering(158) 00:14:03.863 fused_ordering(159) 00:14:03.863 fused_ordering(160) 00:14:03.863 fused_ordering(161) 00:14:03.863 fused_ordering(162) 00:14:03.863 fused_ordering(163) 00:14:03.863 fused_ordering(164) 00:14:03.863 fused_ordering(165) 00:14:03.863 fused_ordering(166) 00:14:03.863 fused_ordering(167) 00:14:03.863 fused_ordering(168) 00:14:03.863 fused_ordering(169) 00:14:03.863 fused_ordering(170) 00:14:03.863 fused_ordering(171) 00:14:03.863 fused_ordering(172) 00:14:03.863 fused_ordering(173) 00:14:03.863 fused_ordering(174) 00:14:03.863 fused_ordering(175) 00:14:03.863 fused_ordering(176) 00:14:03.863 fused_ordering(177) 00:14:03.863 fused_ordering(178) 00:14:03.863 fused_ordering(179) 00:14:03.863 fused_ordering(180) 00:14:03.863 fused_ordering(181) 00:14:03.863 fused_ordering(182) 00:14:03.863 fused_ordering(183) 00:14:03.863 fused_ordering(184) 00:14:03.863 fused_ordering(185) 00:14:03.863 fused_ordering(186) 00:14:03.864 fused_ordering(187) 00:14:03.864 fused_ordering(188) 00:14:03.864 fused_ordering(189) 00:14:03.864 fused_ordering(190) 00:14:03.864 fused_ordering(191) 00:14:03.864 fused_ordering(192) 00:14:03.864 fused_ordering(193) 00:14:03.864 fused_ordering(194) 00:14:03.864 fused_ordering(195) 00:14:03.864 fused_ordering(196) 00:14:03.864 fused_ordering(197) 00:14:03.864 fused_ordering(198) 00:14:03.864 fused_ordering(199) 00:14:03.864 fused_ordering(200) 00:14:03.864 fused_ordering(201) 00:14:03.864 fused_ordering(202) 00:14:03.864 fused_ordering(203) 00:14:03.864 fused_ordering(204) 00:14:03.864 fused_ordering(205) 00:14:04.124 
fused_ordering(206) 00:14:04.124 fused_ordering(207) 00:14:04.124 fused_ordering(208) 00:14:04.124 fused_ordering(209) 00:14:04.124 fused_ordering(210) 00:14:04.124 fused_ordering(211) 00:14:04.124 fused_ordering(212) 00:14:04.124 fused_ordering(213) 00:14:04.124 fused_ordering(214) 00:14:04.124 fused_ordering(215) 00:14:04.124 fused_ordering(216) 00:14:04.124 fused_ordering(217) 00:14:04.124 fused_ordering(218) 00:14:04.124 fused_ordering(219) 00:14:04.124 fused_ordering(220) 00:14:04.124 fused_ordering(221) 00:14:04.124 fused_ordering(222) 00:14:04.124 fused_ordering(223) 00:14:04.124 fused_ordering(224) 00:14:04.124 fused_ordering(225) 00:14:04.124 fused_ordering(226) 00:14:04.124 fused_ordering(227) 00:14:04.124 fused_ordering(228) 00:14:04.124 fused_ordering(229) 00:14:04.124 fused_ordering(230) 00:14:04.124 fused_ordering(231) 00:14:04.124 fused_ordering(232) 00:14:04.124 fused_ordering(233) 00:14:04.124 fused_ordering(234) 00:14:04.124 fused_ordering(235) 00:14:04.124 fused_ordering(236) 00:14:04.124 fused_ordering(237) 00:14:04.124 fused_ordering(238) 00:14:04.124 fused_ordering(239) 00:14:04.124 fused_ordering(240) 00:14:04.124 fused_ordering(241) 00:14:04.124 fused_ordering(242) 00:14:04.124 fused_ordering(243) 00:14:04.124 fused_ordering(244) 00:14:04.124 fused_ordering(245) 00:14:04.124 fused_ordering(246) 00:14:04.124 fused_ordering(247) 00:14:04.125 fused_ordering(248) 00:14:04.125 fused_ordering(249) 00:14:04.125 fused_ordering(250) 00:14:04.125 fused_ordering(251) 00:14:04.125 fused_ordering(252) 00:14:04.125 fused_ordering(253) 00:14:04.125 fused_ordering(254) 00:14:04.125 fused_ordering(255) 00:14:04.125 fused_ordering(256) 00:14:04.125 fused_ordering(257) 00:14:04.125 fused_ordering(258) 00:14:04.125 fused_ordering(259) 00:14:04.125 fused_ordering(260) 00:14:04.125 fused_ordering(261) 00:14:04.125 fused_ordering(262) 00:14:04.125 fused_ordering(263) 00:14:04.125 fused_ordering(264) 00:14:04.125 fused_ordering(265) 00:14:04.125 fused_ordering(266) 00:14:04.125 fused_ordering(267) 00:14:04.125 fused_ordering(268) 00:14:04.125 fused_ordering(269) 00:14:04.125 fused_ordering(270) 00:14:04.125 fused_ordering(271) 00:14:04.125 fused_ordering(272) 00:14:04.125 fused_ordering(273) 00:14:04.125 fused_ordering(274) 00:14:04.125 fused_ordering(275) 00:14:04.125 fused_ordering(276) 00:14:04.125 fused_ordering(277) 00:14:04.125 fused_ordering(278) 00:14:04.125 fused_ordering(279) 00:14:04.125 fused_ordering(280) 00:14:04.125 fused_ordering(281) 00:14:04.125 fused_ordering(282) 00:14:04.125 fused_ordering(283) 00:14:04.125 fused_ordering(284) 00:14:04.125 fused_ordering(285) 00:14:04.125 fused_ordering(286) 00:14:04.125 fused_ordering(287) 00:14:04.125 fused_ordering(288) 00:14:04.125 fused_ordering(289) 00:14:04.125 fused_ordering(290) 00:14:04.125 fused_ordering(291) 00:14:04.125 fused_ordering(292) 00:14:04.125 fused_ordering(293) 00:14:04.125 fused_ordering(294) 00:14:04.125 fused_ordering(295) 00:14:04.125 fused_ordering(296) 00:14:04.125 fused_ordering(297) 00:14:04.125 fused_ordering(298) 00:14:04.125 fused_ordering(299) 00:14:04.125 fused_ordering(300) 00:14:04.125 fused_ordering(301) 00:14:04.125 fused_ordering(302) 00:14:04.125 fused_ordering(303) 00:14:04.125 fused_ordering(304) 00:14:04.125 fused_ordering(305) 00:14:04.125 fused_ordering(306) 00:14:04.125 fused_ordering(307) 00:14:04.125 fused_ordering(308) 00:14:04.125 fused_ordering(309) 00:14:04.125 fused_ordering(310) 00:14:04.125 fused_ordering(311) 00:14:04.125 fused_ordering(312) 00:14:04.125 fused_ordering(313) 
00:14:04.125 fused_ordering(314) 00:14:04.125 fused_ordering(315) 00:14:04.125 fused_ordering(316) 00:14:04.125 fused_ordering(317) 00:14:04.125 fused_ordering(318) 00:14:04.125 fused_ordering(319) 00:14:04.125 fused_ordering(320) 00:14:04.125 fused_ordering(321) 00:14:04.125 fused_ordering(322) 00:14:04.125 fused_ordering(323) 00:14:04.125 fused_ordering(324) 00:14:04.125 fused_ordering(325) 00:14:04.125 fused_ordering(326) 00:14:04.125 fused_ordering(327) 00:14:04.125 fused_ordering(328) 00:14:04.125 fused_ordering(329) 00:14:04.125 fused_ordering(330) 00:14:04.125 fused_ordering(331) 00:14:04.125 fused_ordering(332) 00:14:04.125 fused_ordering(333) 00:14:04.125 fused_ordering(334) 00:14:04.125 fused_ordering(335) 00:14:04.125 fused_ordering(336) 00:14:04.125 fused_ordering(337) 00:14:04.125 fused_ordering(338) 00:14:04.125 fused_ordering(339) 00:14:04.125 fused_ordering(340) 00:14:04.125 fused_ordering(341) 00:14:04.125 fused_ordering(342) 00:14:04.125 fused_ordering(343) 00:14:04.125 fused_ordering(344) 00:14:04.125 fused_ordering(345) 00:14:04.125 fused_ordering(346) 00:14:04.125 fused_ordering(347) 00:14:04.125 fused_ordering(348) 00:14:04.125 fused_ordering(349) 00:14:04.125 fused_ordering(350) 00:14:04.125 fused_ordering(351) 00:14:04.125 fused_ordering(352) 00:14:04.125 fused_ordering(353) 00:14:04.125 fused_ordering(354) 00:14:04.125 fused_ordering(355) 00:14:04.125 fused_ordering(356) 00:14:04.125 fused_ordering(357) 00:14:04.125 fused_ordering(358) 00:14:04.125 fused_ordering(359) 00:14:04.125 fused_ordering(360) 00:14:04.125 fused_ordering(361) 00:14:04.125 fused_ordering(362) 00:14:04.125 fused_ordering(363) 00:14:04.125 fused_ordering(364) 00:14:04.125 fused_ordering(365) 00:14:04.125 fused_ordering(366) 00:14:04.125 fused_ordering(367) 00:14:04.125 fused_ordering(368) 00:14:04.125 fused_ordering(369) 00:14:04.125 fused_ordering(370) 00:14:04.125 fused_ordering(371) 00:14:04.125 fused_ordering(372) 00:14:04.125 fused_ordering(373) 00:14:04.125 fused_ordering(374) 00:14:04.125 fused_ordering(375) 00:14:04.125 fused_ordering(376) 00:14:04.125 fused_ordering(377) 00:14:04.125 fused_ordering(378) 00:14:04.125 fused_ordering(379) 00:14:04.125 fused_ordering(380) 00:14:04.125 fused_ordering(381) 00:14:04.125 fused_ordering(382) 00:14:04.125 fused_ordering(383) 00:14:04.125 fused_ordering(384) 00:14:04.125 fused_ordering(385) 00:14:04.125 fused_ordering(386) 00:14:04.125 fused_ordering(387) 00:14:04.125 fused_ordering(388) 00:14:04.125 fused_ordering(389) 00:14:04.125 fused_ordering(390) 00:14:04.125 fused_ordering(391) 00:14:04.125 fused_ordering(392) 00:14:04.125 fused_ordering(393) 00:14:04.125 fused_ordering(394) 00:14:04.125 fused_ordering(395) 00:14:04.125 fused_ordering(396) 00:14:04.125 fused_ordering(397) 00:14:04.125 fused_ordering(398) 00:14:04.125 fused_ordering(399) 00:14:04.125 fused_ordering(400) 00:14:04.125 fused_ordering(401) 00:14:04.125 fused_ordering(402) 00:14:04.125 fused_ordering(403) 00:14:04.125 fused_ordering(404) 00:14:04.125 fused_ordering(405) 00:14:04.125 fused_ordering(406) 00:14:04.125 fused_ordering(407) 00:14:04.125 fused_ordering(408) 00:14:04.125 fused_ordering(409) 00:14:04.125 fused_ordering(410) 00:14:04.696 fused_ordering(411) 00:14:04.696 fused_ordering(412) 00:14:04.696 fused_ordering(413) 00:14:04.696 fused_ordering(414) 00:14:04.696 fused_ordering(415) 00:14:04.696 fused_ordering(416) 00:14:04.696 fused_ordering(417) 00:14:04.696 fused_ordering(418) 00:14:04.696 fused_ordering(419) 00:14:04.696 fused_ordering(420) 00:14:04.696 
fused_ordering(421) 00:14:04.696 fused_ordering(422) 00:14:04.696 fused_ordering(423) 00:14:04.696 fused_ordering(424) 00:14:04.696 fused_ordering(425) 00:14:04.696 fused_ordering(426) 00:14:04.696 fused_ordering(427) 00:14:04.696 fused_ordering(428) 00:14:04.696 fused_ordering(429) 00:14:04.696 fused_ordering(430) 00:14:04.696 fused_ordering(431) 00:14:04.696 fused_ordering(432) 00:14:04.696 fused_ordering(433) 00:14:04.696 fused_ordering(434) 00:14:04.696 fused_ordering(435) 00:14:04.696 fused_ordering(436) 00:14:04.696 fused_ordering(437) 00:14:04.696 fused_ordering(438) 00:14:04.696 fused_ordering(439) 00:14:04.696 fused_ordering(440) 00:14:04.696 fused_ordering(441) 00:14:04.696 fused_ordering(442) 00:14:04.696 fused_ordering(443) 00:14:04.696 fused_ordering(444) 00:14:04.696 fused_ordering(445) 00:14:04.696 fused_ordering(446) 00:14:04.696 fused_ordering(447) 00:14:04.696 fused_ordering(448) 00:14:04.696 fused_ordering(449) 00:14:04.696 fused_ordering(450) 00:14:04.696 fused_ordering(451) 00:14:04.696 fused_ordering(452) 00:14:04.696 fused_ordering(453) 00:14:04.696 fused_ordering(454) 00:14:04.696 fused_ordering(455) 00:14:04.696 fused_ordering(456) 00:14:04.696 fused_ordering(457) 00:14:04.696 fused_ordering(458) 00:14:04.696 fused_ordering(459) 00:14:04.696 fused_ordering(460) 00:14:04.696 fused_ordering(461) 00:14:04.696 fused_ordering(462) 00:14:04.696 fused_ordering(463) 00:14:04.696 fused_ordering(464) 00:14:04.696 fused_ordering(465) 00:14:04.696 fused_ordering(466) 00:14:04.696 fused_ordering(467) 00:14:04.696 fused_ordering(468) 00:14:04.696 fused_ordering(469) 00:14:04.696 fused_ordering(470) 00:14:04.696 fused_ordering(471) 00:14:04.696 fused_ordering(472) 00:14:04.696 fused_ordering(473) 00:14:04.696 fused_ordering(474) 00:14:04.696 fused_ordering(475) 00:14:04.696 fused_ordering(476) 00:14:04.696 fused_ordering(477) 00:14:04.696 fused_ordering(478) 00:14:04.696 fused_ordering(479) 00:14:04.696 fused_ordering(480) 00:14:04.696 fused_ordering(481) 00:14:04.696 fused_ordering(482) 00:14:04.696 fused_ordering(483) 00:14:04.696 fused_ordering(484) 00:14:04.696 fused_ordering(485) 00:14:04.696 fused_ordering(486) 00:14:04.696 fused_ordering(487) 00:14:04.696 fused_ordering(488) 00:14:04.696 fused_ordering(489) 00:14:04.696 fused_ordering(490) 00:14:04.696 fused_ordering(491) 00:14:04.696 fused_ordering(492) 00:14:04.696 fused_ordering(493) 00:14:04.696 fused_ordering(494) 00:14:04.696 fused_ordering(495) 00:14:04.696 fused_ordering(496) 00:14:04.696 fused_ordering(497) 00:14:04.696 fused_ordering(498) 00:14:04.696 fused_ordering(499) 00:14:04.696 fused_ordering(500) 00:14:04.696 fused_ordering(501) 00:14:04.696 fused_ordering(502) 00:14:04.696 fused_ordering(503) 00:14:04.696 fused_ordering(504) 00:14:04.696 fused_ordering(505) 00:14:04.696 fused_ordering(506) 00:14:04.696 fused_ordering(507) 00:14:04.696 fused_ordering(508) 00:14:04.696 fused_ordering(509) 00:14:04.696 fused_ordering(510) 00:14:04.696 fused_ordering(511) 00:14:04.696 fused_ordering(512) 00:14:04.696 fused_ordering(513) 00:14:04.696 fused_ordering(514) 00:14:04.696 fused_ordering(515) 00:14:04.696 fused_ordering(516) 00:14:04.696 fused_ordering(517) 00:14:04.696 fused_ordering(518) 00:14:04.696 fused_ordering(519) 00:14:04.696 fused_ordering(520) 00:14:04.696 fused_ordering(521) 00:14:04.696 fused_ordering(522) 00:14:04.696 fused_ordering(523) 00:14:04.696 fused_ordering(524) 00:14:04.696 fused_ordering(525) 00:14:04.696 fused_ordering(526) 00:14:04.696 fused_ordering(527) 00:14:04.696 fused_ordering(528) 
00:14:04.696 fused_ordering(529) 00:14:04.696 fused_ordering(530) 00:14:04.696 fused_ordering(531) 00:14:04.696 fused_ordering(532) 00:14:04.696 fused_ordering(533) 00:14:04.696 fused_ordering(534) 00:14:04.696 fused_ordering(535) 00:14:04.696 fused_ordering(536) 00:14:04.696 fused_ordering(537) 00:14:04.696 fused_ordering(538) 00:14:04.696 fused_ordering(539) 00:14:04.696 fused_ordering(540) 00:14:04.696 fused_ordering(541) 00:14:04.696 fused_ordering(542) 00:14:04.696 fused_ordering(543) 00:14:04.696 fused_ordering(544) 00:14:04.696 fused_ordering(545) 00:14:04.696 fused_ordering(546) 00:14:04.696 fused_ordering(547) 00:14:04.696 fused_ordering(548) 00:14:04.696 fused_ordering(549) 00:14:04.696 fused_ordering(550) 00:14:04.696 fused_ordering(551) 00:14:04.696 fused_ordering(552) 00:14:04.696 fused_ordering(553) 00:14:04.696 fused_ordering(554) 00:14:04.696 fused_ordering(555) 00:14:04.696 fused_ordering(556) 00:14:04.696 fused_ordering(557) 00:14:04.696 fused_ordering(558) 00:14:04.696 fused_ordering(559) 00:14:04.696 fused_ordering(560) 00:14:04.696 fused_ordering(561) 00:14:04.696 fused_ordering(562) 00:14:04.696 fused_ordering(563) 00:14:04.696 fused_ordering(564) 00:14:04.696 fused_ordering(565) 00:14:04.696 fused_ordering(566) 00:14:04.696 fused_ordering(567) 00:14:04.696 fused_ordering(568) 00:14:04.696 fused_ordering(569) 00:14:04.696 fused_ordering(570) 00:14:04.696 fused_ordering(571) 00:14:04.696 fused_ordering(572) 00:14:04.696 fused_ordering(573) 00:14:04.696 fused_ordering(574) 00:14:04.696 fused_ordering(575) 00:14:04.697 fused_ordering(576) 00:14:04.697 fused_ordering(577) 00:14:04.697 fused_ordering(578) 00:14:04.697 fused_ordering(579) 00:14:04.697 fused_ordering(580) 00:14:04.697 fused_ordering(581) 00:14:04.697 fused_ordering(582) 00:14:04.697 fused_ordering(583) 00:14:04.697 fused_ordering(584) 00:14:04.697 fused_ordering(585) 00:14:04.697 fused_ordering(586) 00:14:04.697 fused_ordering(587) 00:14:04.697 fused_ordering(588) 00:14:04.697 fused_ordering(589) 00:14:04.697 fused_ordering(590) 00:14:04.697 fused_ordering(591) 00:14:04.697 fused_ordering(592) 00:14:04.697 fused_ordering(593) 00:14:04.697 fused_ordering(594) 00:14:04.697 fused_ordering(595) 00:14:04.697 fused_ordering(596) 00:14:04.697 fused_ordering(597) 00:14:04.697 fused_ordering(598) 00:14:04.697 fused_ordering(599) 00:14:04.697 fused_ordering(600) 00:14:04.697 fused_ordering(601) 00:14:04.697 fused_ordering(602) 00:14:04.697 fused_ordering(603) 00:14:04.697 fused_ordering(604) 00:14:04.697 fused_ordering(605) 00:14:04.697 fused_ordering(606) 00:14:04.697 fused_ordering(607) 00:14:04.697 fused_ordering(608) 00:14:04.697 fused_ordering(609) 00:14:04.697 fused_ordering(610) 00:14:04.697 fused_ordering(611) 00:14:04.697 fused_ordering(612) 00:14:04.697 fused_ordering(613) 00:14:04.697 fused_ordering(614) 00:14:04.697 fused_ordering(615) 00:14:04.958 fused_ordering(616) 00:14:04.958 fused_ordering(617) 00:14:04.958 fused_ordering(618) 00:14:04.958 fused_ordering(619) 00:14:04.958 fused_ordering(620) 00:14:04.958 fused_ordering(621) 00:14:04.958 fused_ordering(622) 00:14:04.958 fused_ordering(623) 00:14:04.958 fused_ordering(624) 00:14:04.958 fused_ordering(625) 00:14:04.958 fused_ordering(626) 00:14:04.958 fused_ordering(627) 00:14:04.958 fused_ordering(628) 00:14:04.958 fused_ordering(629) 00:14:04.958 fused_ordering(630) 00:14:04.958 fused_ordering(631) 00:14:04.958 fused_ordering(632) 00:14:04.958 fused_ordering(633) 00:14:04.958 fused_ordering(634) 00:14:04.958 fused_ordering(635) 00:14:04.958 
fused_ordering(636) 00:14:04.958 fused_ordering(637) 00:14:04.958 fused_ordering(638) 00:14:04.958 fused_ordering(639) 00:14:04.958 fused_ordering(640) 00:14:04.958 fused_ordering(641) 00:14:04.958 fused_ordering(642) 00:14:04.958 fused_ordering(643) 00:14:04.958 fused_ordering(644) 00:14:04.958 fused_ordering(645) 00:14:04.958 fused_ordering(646) 00:14:04.958 fused_ordering(647) 00:14:04.958 fused_ordering(648) 00:14:04.959 fused_ordering(649) 00:14:04.959 fused_ordering(650) 00:14:04.959 fused_ordering(651) 00:14:04.959 fused_ordering(652) 00:14:04.959 fused_ordering(653) 00:14:04.959 fused_ordering(654) 00:14:04.959 fused_ordering(655) 00:14:04.959 fused_ordering(656) 00:14:04.959 fused_ordering(657) 00:14:04.959 fused_ordering(658) 00:14:04.959 fused_ordering(659) 00:14:04.959 fused_ordering(660) 00:14:04.959 fused_ordering(661) 00:14:04.959 fused_ordering(662) 00:14:04.959 fused_ordering(663) 00:14:04.959 fused_ordering(664) 00:14:04.959 fused_ordering(665) 00:14:04.959 fused_ordering(666) 00:14:04.959 fused_ordering(667) 00:14:04.959 fused_ordering(668) 00:14:04.959 fused_ordering(669) 00:14:04.959 fused_ordering(670) 00:14:04.959 fused_ordering(671) 00:14:04.959 fused_ordering(672) 00:14:04.959 fused_ordering(673) 00:14:04.959 fused_ordering(674) 00:14:04.959 fused_ordering(675) 00:14:04.959 fused_ordering(676) 00:14:04.959 fused_ordering(677) 00:14:04.959 fused_ordering(678) 00:14:04.959 fused_ordering(679) 00:14:04.959 fused_ordering(680) 00:14:04.959 fused_ordering(681) 00:14:04.959 fused_ordering(682) 00:14:04.959 fused_ordering(683) 00:14:04.959 fused_ordering(684) 00:14:04.959 fused_ordering(685) 00:14:04.959 fused_ordering(686) 00:14:04.959 fused_ordering(687) 00:14:04.959 fused_ordering(688) 00:14:04.959 fused_ordering(689) 00:14:04.959 fused_ordering(690) 00:14:04.959 fused_ordering(691) 00:14:04.959 fused_ordering(692) 00:14:04.959 fused_ordering(693) 00:14:04.959 fused_ordering(694) 00:14:04.959 fused_ordering(695) 00:14:04.959 fused_ordering(696) 00:14:04.959 fused_ordering(697) 00:14:04.959 fused_ordering(698) 00:14:04.959 fused_ordering(699) 00:14:04.959 fused_ordering(700) 00:14:04.959 fused_ordering(701) 00:14:04.959 fused_ordering(702) 00:14:04.959 fused_ordering(703) 00:14:04.959 fused_ordering(704) 00:14:04.959 fused_ordering(705) 00:14:04.959 fused_ordering(706) 00:14:04.959 fused_ordering(707) 00:14:04.959 fused_ordering(708) 00:14:04.959 fused_ordering(709) 00:14:04.959 fused_ordering(710) 00:14:04.959 fused_ordering(711) 00:14:04.959 fused_ordering(712) 00:14:04.959 fused_ordering(713) 00:14:04.959 fused_ordering(714) 00:14:04.959 fused_ordering(715) 00:14:04.959 fused_ordering(716) 00:14:04.959 fused_ordering(717) 00:14:04.959 fused_ordering(718) 00:14:04.959 fused_ordering(719) 00:14:04.959 fused_ordering(720) 00:14:04.959 fused_ordering(721) 00:14:04.959 fused_ordering(722) 00:14:04.959 fused_ordering(723) 00:14:04.959 fused_ordering(724) 00:14:04.959 fused_ordering(725) 00:14:04.959 fused_ordering(726) 00:14:04.959 fused_ordering(727) 00:14:04.959 fused_ordering(728) 00:14:04.959 fused_ordering(729) 00:14:04.959 fused_ordering(730) 00:14:04.959 fused_ordering(731) 00:14:04.959 fused_ordering(732) 00:14:04.959 fused_ordering(733) 00:14:04.959 fused_ordering(734) 00:14:04.959 fused_ordering(735) 00:14:04.959 fused_ordering(736) 00:14:04.959 fused_ordering(737) 00:14:04.959 fused_ordering(738) 00:14:04.959 fused_ordering(739) 00:14:04.959 fused_ordering(740) 00:14:04.959 fused_ordering(741) 00:14:04.959 fused_ordering(742) 00:14:04.959 fused_ordering(743) 
00:14:04.959 fused_ordering(744) 00:14:04.959 fused_ordering(745) 00:14:04.959 fused_ordering(746) 00:14:04.959 fused_ordering(747) 00:14:04.959 fused_ordering(748) 00:14:04.959 fused_ordering(749) 00:14:04.959 fused_ordering(750) 00:14:04.959 fused_ordering(751) 00:14:04.959 fused_ordering(752) 00:14:04.959 fused_ordering(753) 00:14:04.959 fused_ordering(754) 00:14:04.959 fused_ordering(755) 00:14:04.959 fused_ordering(756) 00:14:04.959 fused_ordering(757) 00:14:04.959 fused_ordering(758) 00:14:04.959 fused_ordering(759) 00:14:04.959 fused_ordering(760) 00:14:04.959 fused_ordering(761) 00:14:04.959 fused_ordering(762) 00:14:04.959 fused_ordering(763) 00:14:04.959 fused_ordering(764) 00:14:04.959 fused_ordering(765) 00:14:04.959 fused_ordering(766) 00:14:04.959 fused_ordering(767) 00:14:04.959 fused_ordering(768) 00:14:04.959 fused_ordering(769) 00:14:04.959 fused_ordering(770) 00:14:04.959 fused_ordering(771) 00:14:04.959 fused_ordering(772) 00:14:04.959 fused_ordering(773) 00:14:04.959 fused_ordering(774) 00:14:04.959 fused_ordering(775) 00:14:04.959 fused_ordering(776) 00:14:04.959 fused_ordering(777) 00:14:04.959 fused_ordering(778) 00:14:04.959 fused_ordering(779) 00:14:04.959 fused_ordering(780) 00:14:04.959 fused_ordering(781) 00:14:04.959 fused_ordering(782) 00:14:04.959 fused_ordering(783) 00:14:04.959 fused_ordering(784) 00:14:04.959 fused_ordering(785) 00:14:04.959 fused_ordering(786) 00:14:04.959 fused_ordering(787) 00:14:04.959 fused_ordering(788) 00:14:04.959 fused_ordering(789) 00:14:04.959 fused_ordering(790) 00:14:04.959 fused_ordering(791) 00:14:04.959 fused_ordering(792) 00:14:04.959 fused_ordering(793) 00:14:04.959 fused_ordering(794) 00:14:04.959 fused_ordering(795) 00:14:04.959 fused_ordering(796) 00:14:04.959 fused_ordering(797) 00:14:04.959 fused_ordering(798) 00:14:04.959 fused_ordering(799) 00:14:04.959 fused_ordering(800) 00:14:04.959 fused_ordering(801) 00:14:04.959 fused_ordering(802) 00:14:04.959 fused_ordering(803) 00:14:04.959 fused_ordering(804) 00:14:04.959 fused_ordering(805) 00:14:04.959 fused_ordering(806) 00:14:04.959 fused_ordering(807) 00:14:04.959 fused_ordering(808) 00:14:04.959 fused_ordering(809) 00:14:04.959 fused_ordering(810) 00:14:04.959 fused_ordering(811) 00:14:04.959 fused_ordering(812) 00:14:04.959 fused_ordering(813) 00:14:04.959 fused_ordering(814) 00:14:04.959 fused_ordering(815) 00:14:04.959 fused_ordering(816) 00:14:04.959 fused_ordering(817) 00:14:04.959 fused_ordering(818) 00:14:04.959 fused_ordering(819) 00:14:04.959 fused_ordering(820) 00:14:05.901 fused_ordering(821) 00:14:05.901 fused_ordering(822) 00:14:05.901 fused_ordering(823) 00:14:05.901 fused_ordering(824) 00:14:05.901 fused_ordering(825) 00:14:05.901 fused_ordering(826) 00:14:05.901 fused_ordering(827) 00:14:05.901 fused_ordering(828) 00:14:05.901 fused_ordering(829) 00:14:05.901 fused_ordering(830) 00:14:05.901 fused_ordering(831) 00:14:05.901 fused_ordering(832) 00:14:05.901 fused_ordering(833) 00:14:05.901 fused_ordering(834) 00:14:05.901 fused_ordering(835) 00:14:05.901 fused_ordering(836) 00:14:05.901 fused_ordering(837) 00:14:05.901 fused_ordering(838) 00:14:05.901 fused_ordering(839) 00:14:05.901 fused_ordering(840) 00:14:05.901 fused_ordering(841) 00:14:05.901 fused_ordering(842) 00:14:05.901 fused_ordering(843) 00:14:05.901 fused_ordering(844) 00:14:05.901 fused_ordering(845) 00:14:05.901 fused_ordering(846) 00:14:05.901 fused_ordering(847) 00:14:05.901 fused_ordering(848) 00:14:05.901 fused_ordering(849) 00:14:05.901 fused_ordering(850) 00:14:05.901 
fused_ordering(851) 00:14:05.901 fused_ordering(852) 00:14:05.901 fused_ordering(853) 00:14:05.901 fused_ordering(854) 00:14:05.901 fused_ordering(855) 00:14:05.901 fused_ordering(856) 00:14:05.901 fused_ordering(857) 00:14:05.901 fused_ordering(858) 00:14:05.901 fused_ordering(859) 00:14:05.901 fused_ordering(860) 00:14:05.901 fused_ordering(861) 00:14:05.901 fused_ordering(862) 00:14:05.901 fused_ordering(863) 00:14:05.901 fused_ordering(864) 00:14:05.901 fused_ordering(865) 00:14:05.901 fused_ordering(866) 00:14:05.901 fused_ordering(867) 00:14:05.901 fused_ordering(868) 00:14:05.901 fused_ordering(869) 00:14:05.901 fused_ordering(870) 00:14:05.901 fused_ordering(871) 00:14:05.901 fused_ordering(872) 00:14:05.901 fused_ordering(873) 00:14:05.901 fused_ordering(874) 00:14:05.901 fused_ordering(875) 00:14:05.901 fused_ordering(876) 00:14:05.901 fused_ordering(877) 00:14:05.901 fused_ordering(878) 00:14:05.901 fused_ordering(879) 00:14:05.901 fused_ordering(880) 00:14:05.901 fused_ordering(881) 00:14:05.901 fused_ordering(882) 00:14:05.901 fused_ordering(883) 00:14:05.901 fused_ordering(884) 00:14:05.901 fused_ordering(885) 00:14:05.901 fused_ordering(886) 00:14:05.901 fused_ordering(887) 00:14:05.901 fused_ordering(888) 00:14:05.901 fused_ordering(889) 00:14:05.901 fused_ordering(890) 00:14:05.901 fused_ordering(891) 00:14:05.901 fused_ordering(892) 00:14:05.901 fused_ordering(893) 00:14:05.901 fused_ordering(894) 00:14:05.901 fused_ordering(895) 00:14:05.901 fused_ordering(896) 00:14:05.901 fused_ordering(897) 00:14:05.901 fused_ordering(898) 00:14:05.901 fused_ordering(899) 00:14:05.901 fused_ordering(900) 00:14:05.901 fused_ordering(901) 00:14:05.901 fused_ordering(902) 00:14:05.901 fused_ordering(903) 00:14:05.901 fused_ordering(904) 00:14:05.901 fused_ordering(905) 00:14:05.901 fused_ordering(906) 00:14:05.901 fused_ordering(907) 00:14:05.901 fused_ordering(908) 00:14:05.901 fused_ordering(909) 00:14:05.901 fused_ordering(910) 00:14:05.901 fused_ordering(911) 00:14:05.901 fused_ordering(912) 00:14:05.901 fused_ordering(913) 00:14:05.902 fused_ordering(914) 00:14:05.902 fused_ordering(915) 00:14:05.902 fused_ordering(916) 00:14:05.902 fused_ordering(917) 00:14:05.902 fused_ordering(918) 00:14:05.902 fused_ordering(919) 00:14:05.902 fused_ordering(920) 00:14:05.902 fused_ordering(921) 00:14:05.902 fused_ordering(922) 00:14:05.902 fused_ordering(923) 00:14:05.902 fused_ordering(924) 00:14:05.902 fused_ordering(925) 00:14:05.902 fused_ordering(926) 00:14:05.902 fused_ordering(927) 00:14:05.902 fused_ordering(928) 00:14:05.902 fused_ordering(929) 00:14:05.902 fused_ordering(930) 00:14:05.902 fused_ordering(931) 00:14:05.902 fused_ordering(932) 00:14:05.902 fused_ordering(933) 00:14:05.902 fused_ordering(934) 00:14:05.902 fused_ordering(935) 00:14:05.902 fused_ordering(936) 00:14:05.902 fused_ordering(937) 00:14:05.902 fused_ordering(938) 00:14:05.902 fused_ordering(939) 00:14:05.902 fused_ordering(940) 00:14:05.902 fused_ordering(941) 00:14:05.902 fused_ordering(942) 00:14:05.902 fused_ordering(943) 00:14:05.902 fused_ordering(944) 00:14:05.902 fused_ordering(945) 00:14:05.902 fused_ordering(946) 00:14:05.902 fused_ordering(947) 00:14:05.902 fused_ordering(948) 00:14:05.902 fused_ordering(949) 00:14:05.902 fused_ordering(950) 00:14:05.902 fused_ordering(951) 00:14:05.902 fused_ordering(952) 00:14:05.902 fused_ordering(953) 00:14:05.902 fused_ordering(954) 00:14:05.902 fused_ordering(955) 00:14:05.902 fused_ordering(956) 00:14:05.902 fused_ordering(957) 00:14:05.902 fused_ordering(958) 
00:14:05.902 fused_ordering(959) 00:14:05.902 fused_ordering(960) 00:14:05.902 fused_ordering(961) 00:14:05.902 fused_ordering(962) 00:14:05.902 fused_ordering(963) 00:14:05.902 fused_ordering(964) 00:14:05.902 fused_ordering(965) 00:14:05.902 fused_ordering(966) 00:14:05.902 fused_ordering(967) 00:14:05.902 fused_ordering(968) 00:14:05.902 fused_ordering(969) 00:14:05.902 fused_ordering(970) 00:14:05.902 fused_ordering(971) 00:14:05.902 fused_ordering(972) 00:14:05.902 fused_ordering(973) 00:14:05.902 fused_ordering(974) 00:14:05.902 fused_ordering(975) 00:14:05.902 fused_ordering(976) 00:14:05.902 fused_ordering(977) 00:14:05.902 fused_ordering(978) 00:14:05.902 fused_ordering(979) 00:14:05.902 fused_ordering(980) 00:14:05.902 fused_ordering(981) 00:14:05.902 fused_ordering(982) 00:14:05.902 fused_ordering(983) 00:14:05.902 fused_ordering(984) 00:14:05.902 fused_ordering(985) 00:14:05.902 fused_ordering(986) 00:14:05.902 fused_ordering(987) 00:14:05.902 fused_ordering(988) 00:14:05.902 fused_ordering(989) 00:14:05.902 fused_ordering(990) 00:14:05.902 fused_ordering(991) 00:14:05.902 fused_ordering(992) 00:14:05.902 fused_ordering(993) 00:14:05.902 fused_ordering(994) 00:14:05.902 fused_ordering(995) 00:14:05.902 fused_ordering(996) 00:14:05.902 fused_ordering(997) 00:14:05.902 fused_ordering(998) 00:14:05.902 fused_ordering(999) 00:14:05.902 fused_ordering(1000) 00:14:05.902 fused_ordering(1001) 00:14:05.902 fused_ordering(1002) 00:14:05.902 fused_ordering(1003) 00:14:05.902 fused_ordering(1004) 00:14:05.902 fused_ordering(1005) 00:14:05.902 fused_ordering(1006) 00:14:05.902 fused_ordering(1007) 00:14:05.902 fused_ordering(1008) 00:14:05.902 fused_ordering(1009) 00:14:05.902 fused_ordering(1010) 00:14:05.902 fused_ordering(1011) 00:14:05.902 fused_ordering(1012) 00:14:05.902 fused_ordering(1013) 00:14:05.902 fused_ordering(1014) 00:14:05.902 fused_ordering(1015) 00:14:05.902 fused_ordering(1016) 00:14:05.902 fused_ordering(1017) 00:14:05.902 fused_ordering(1018) 00:14:05.902 fused_ordering(1019) 00:14:05.902 fused_ordering(1020) 00:14:05.902 fused_ordering(1021) 00:14:05.902 fused_ordering(1022) 00:14:05.902 fused_ordering(1023) 00:14:05.902 06:57:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:05.902 06:57:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:05.902 06:57:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:05.902 06:57:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:14:05.902 06:57:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:05.902 06:57:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:14:05.902 06:57:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:05.902 06:57:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:05.902 rmmod nvme_tcp 00:14:05.902 rmmod nvme_fabrics 00:14:05.902 rmmod nvme_keyring 00:14:05.902 06:57:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:05.902 06:57:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:14:05.902 06:57:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:14:05.902 06:57:05 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@515 -- # '[' -n 3063810 ']' 00:14:05.902 06:57:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # killprocess 3063810 00:14:05.902 06:57:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 3063810 ']' 00:14:05.902 06:57:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 3063810 00:14:05.902 06:57:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:14:05.902 06:57:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:05.902 06:57:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3063810 00:14:05.902 06:57:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:05.902 06:57:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:05.902 06:57:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3063810' 00:14:05.902 killing process with pid 3063810 00:14:05.902 06:57:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 3063810 00:14:05.902 06:57:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 3063810 00:14:05.902 06:57:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:05.902 06:57:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:05.902 06:57:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:05.902 06:57:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:14:05.902 06:57:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-restore 00:14:05.902 06:57:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-save 00:14:05.902 06:57:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:05.902 06:57:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:05.902 06:57:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:05.902 06:57:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:05.902 06:57:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:05.902 06:57:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:08.449 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:08.449 00:14:08.449 real 0m13.430s 00:14:08.449 user 0m7.082s 00:14:08.449 sys 0m7.237s 00:14:08.449 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:08.450 ************************************ 00:14:08.450 END TEST nvmf_fused_ordering 00:14:08.450 
************************************ 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:08.450 ************************************ 00:14:08.450 START TEST nvmf_ns_masking 00:14:08.450 ************************************ 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:08.450 * Looking for test storage... 00:14:08.450 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lcov --version 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:08.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.450 --rc genhtml_branch_coverage=1 00:14:08.450 --rc genhtml_function_coverage=1 00:14:08.450 --rc genhtml_legend=1 00:14:08.450 --rc geninfo_all_blocks=1 00:14:08.450 --rc geninfo_unexecuted_blocks=1 00:14:08.450 00:14:08.450 ' 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:08.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.450 --rc genhtml_branch_coverage=1 00:14:08.450 --rc genhtml_function_coverage=1 00:14:08.450 --rc genhtml_legend=1 00:14:08.450 --rc geninfo_all_blocks=1 00:14:08.450 --rc geninfo_unexecuted_blocks=1 00:14:08.450 00:14:08.450 ' 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:08.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.450 --rc genhtml_branch_coverage=1 00:14:08.450 --rc genhtml_function_coverage=1 00:14:08.450 --rc genhtml_legend=1 00:14:08.450 --rc geninfo_all_blocks=1 00:14:08.450 --rc geninfo_unexecuted_blocks=1 00:14:08.450 00:14:08.450 ' 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:08.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.450 --rc genhtml_branch_coverage=1 00:14:08.450 --rc genhtml_function_coverage=1 00:14:08.450 --rc genhtml_legend=1 00:14:08.450 --rc geninfo_all_blocks=1 00:14:08.450 --rc geninfo_unexecuted_blocks=1 00:14:08.450 00:14:08.450 ' 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:08.450 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:08.451 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:08.451 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:08.451 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:08.451 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:08.451 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:08.451 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:08.451 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:08.451 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:08.451 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:14:08.451 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:08.451 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=cf387aa7-aa07-42d0-bf6d-3944920aec32 00:14:08.451 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:08.451 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=d05f9ee8-ba55-477d-953d-10ba54d6e7b2 00:14:08.451 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:08.451 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:08.451 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:08.451 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:08.451 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=ca2b490b-c279-413e-b6df-7a4190a6c3e5 00:14:08.451 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:08.451 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:08.451 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:08.451 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:08.451 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:08.451 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:08.451 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:08.451 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:08.451 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:08.451 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:14:08.451 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:14:08.451 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:14:08.451 06:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:16.590 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:16.590 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:14:16.590 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:16.590 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:16.590 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:16.590 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:16.590 06:57:14 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:16.590 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:14:16.590 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:16.590 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:14:16.590 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:16.591 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:16.591 06:57:14 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:16.591 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:16.591 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 
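The @366–@426 trace entries here are nvmf/common.sh's gather_supported_nvmf_pci_devs pass: per-family PCI ID tables (e810 = 0x8086:0x1592/0x159b, x722 = 0x8086:0x37d2, mlx = the 0x15b3 list) are filled from the cached bus scan, the surviving devices are filtered by transport, and each one has its kernel interfaces resolved from its sysfs net/ directory. A minimal standalone sketch of the same idea, assuming direct sysfs reads rather than the harness's pci_bus_cache and hard-coding the two E810 IDs seen in this run:

#!/usr/bin/env bash
# Hedged sketch: find Intel E810 ports (vendor 0x8086, device 0x1592/0x159b)
# and print their net interfaces, mirroring the "Found ..." / "Found net
# devices under ..." echoes in the trace above. Reads sysfs directly
# instead of the harness's cached bus scan.
shopt -s nullglob
for dev in /sys/bus/pci/devices/*; do
    vendor=$(<"$dev/vendor")                     # e.g. 0x8086
    device=$(<"$dev/device")                     # e.g. 0x159b
    [[ $vendor == 0x8086 && $device =~ ^0x(1592|159b)$ ]] || continue
    echo "Found ${dev##*/} ($vendor - $device)"
    for net in "$dev"/net/*; do                  # kernel names, e.g. cvl_0_0
        echo "Found net devices under ${dev##*/}: ${net##*/}"
    done
done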
00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:16.591 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # is_hw=yes 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:16.591 06:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:16.591 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:16.591 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:16.591 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:16.591 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:16.591 06:57:15 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:16.591 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:16.591 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:16.591 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:16.591 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:16.591 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.671 ms 00:14:16.591 00:14:16.591 --- 10.0.0.2 ping statistics --- 00:14:16.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:16.591 rtt min/avg/max/mdev = 0.671/0.671/0.671/0.000 ms 00:14:16.591 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:16.591 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:16.591 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:14:16.591 00:14:16.591 --- 10.0.0.1 ping statistics --- 00:14:16.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:16.591 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:14:16.591 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:16.591 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # return 0 00:14:16.591 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:16.591 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:16.591 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:16.591 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:16.591 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:16.591 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:16.591 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:16.591 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:16.591 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:16.591 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:16.591 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:16.591 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # nvmfpid=3069147 00:14:16.591 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # waitforlisten 3069147 00:14:16.591 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 3069147 ']' 00:14:16.592 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:16.592 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:16.592 06:57:15 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:16.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:16.592 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:16.592 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:16.592 06:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:16.592 [2024-10-16 06:57:15.284981] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:14:16.592 [2024-10-16 06:57:15.285047] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:16.592 [2024-10-16 06:57:15.373649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:16.592 [2024-10-16 06:57:15.424507] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:16.592 [2024-10-16 06:57:15.424556] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:16.592 [2024-10-16 06:57:15.424564] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:16.592 [2024-10-16 06:57:15.424571] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:16.592 [2024-10-16 06:57:15.424577] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
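By this point nvmftestinit has split the two E810 ports across a network namespace: cvl_0_0 (10.0.0.2) was moved into cvl_0_0_ns_spdk as the target side, cvl_0_1 (10.0.0.1) stayed in the root namespace as the initiator side, an iptables rule opened TCP/4420, and the two pings proved the path in both directions; nvmfappstart then launched nvmf_tgt inside the namespace and waitforlisten is polling its RPC socket. A condensed, hedged recreation of that bring-up, assuming a placeholder SPDK_DIR and reusing the interface names from this log:

#!/usr/bin/env bash
# Hedged sketch of the nvmftestinit/nvmfappstart sequence traced above.
# SPDK_DIR is a placeholder (assumption); interface names match this log.
set -e
SPDK_DIR=/path/to/spdk
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port and verify the data path in both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
modprobe nvme-tcp
# Start the target inside the namespace; poll its RPC socket instead of
# the harness's waitforlisten helper (assumption: rpc_get_methods as probe).
ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
until "$SPDK_DIR/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

Everything above needs root, and the harness keeps a trap on nvmftestfini (armed at nvmf/common.sh@472 earlier in this trace) so the namespace, the iptables rule, and the target process are torn down however the test exits.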
00:14:16.592 [2024-10-16 06:57:15.425338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.852 06:57:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:16.852 06:57:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:14:16.852 06:57:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:16.852 06:57:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:16.852 06:57:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:16.852 06:57:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:16.853 06:57:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:16.853 [2024-10-16 06:57:16.315149] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:16.853 06:57:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:16.853 06:57:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:16.853 06:57:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:17.113 Malloc1 00:14:17.113 06:57:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:17.373 Malloc2 00:14:17.373 06:57:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:17.634 06:57:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:17.896 06:57:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:17.896 [2024-10-16 06:57:17.342832] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:17.896 06:57:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:17.896 06:57:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ca2b490b-c279-413e-b6df-7a4190a6c3e5 -a 10.0.0.2 -s 4420 -i 4 00:14:18.156 06:57:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:18.156 06:57:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:18.156 06:57:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:18.156 06:57:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:18.156 
06:57:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:20.702 06:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:20.702 06:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:20.702 06:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:20.702 06:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:20.702 06:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:20.702 06:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:20.702 06:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:20.702 06:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:20.702 06:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:20.702 06:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:20.702 06:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:20.702 06:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:20.702 06:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:20.702 [ 0]:0x1 00:14:20.702 06:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:20.702 06:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:20.702 06:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=835072e2b77b4fc6990cedd417147796 00:14:20.702 06:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 835072e2b77b4fc6990cedd417147796 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:20.702 06:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:20.702 06:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:20.702 06:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:20.702 06:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:20.702 [ 0]:0x1 00:14:20.702 06:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:20.702 06:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:20.702 06:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=835072e2b77b4fc6990cedd417147796 00:14:20.702 06:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 835072e2b77b4fc6990cedd417147796 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:20.702 06:57:19 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:20.702 06:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:20.702 06:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:20.702 [ 1]:0x2 00:14:20.702 06:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:20.702 06:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:20.702 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1285430453884cf49f4f7a344c882c0b 00:14:20.702 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1285430453884cf49f4f7a344c882c0b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:20.702 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:20.702 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:20.702 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:20.702 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:20.962 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:21.223 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:21.223 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ca2b490b-c279-413e-b6df-7a4190a6c3e5 -a 10.0.0.2 -s 4420 -i 4 00:14:21.223 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:21.223 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:21.223 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:21.223 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:14:21.223 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:14:21.223 06:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:23.136 06:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:23.136 06:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:23.136 06:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:23.136 06:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:23.136 06:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:23.136 06:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # 
return 0 00:14:23.136 06:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:23.136 06:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:23.397 06:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:23.397 06:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:23.397 06:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:23.397 06:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:23.397 06:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:23.397 06:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:23.397 06:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:23.397 06:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:23.397 06:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:23.397 06:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:23.397 06:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:23.397 06:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:23.397 06:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:23.397 06:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:23.397 06:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:23.397 06:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:23.397 06:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:23.397 06:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:23.397 06:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:23.397 06:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:23.397 06:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:14:23.397 06:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:23.397 06:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:23.397 [ 0]:0x2 00:14:23.397 06:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:23.397 06:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:23.658 06:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=1285430453884cf49f4f7a344c882c0b 00:14:23.658 06:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1285430453884cf49f4f7a344c882c0b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:23.658 06:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:23.658 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:23.658 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:23.658 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:23.658 [ 0]:0x1 00:14:23.658 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:23.658 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:23.658 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=835072e2b77b4fc6990cedd417147796 00:14:23.658 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 835072e2b77b4fc6990cedd417147796 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:23.658 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:23.658 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:23.658 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:23.658 [ 1]:0x2 00:14:23.658 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:23.658 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:23.658 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1285430453884cf49f4f7a344c882c0b 00:14:23.658 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1285430453884cf49f4f7a344c882c0b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:23.658 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:23.918 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:23.918 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:23.918 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:23.918 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:23.918 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:23.918 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:23.918 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:23.918 06:57:23 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:23.918 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:23.918 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:23.918 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:23.918 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:23.918 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:23.918 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:23.918 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:23.918 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:23.918 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:23.918 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:23.918 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:23.918 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:23.918 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:23.918 [ 0]:0x2 00:14:23.918 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:23.918 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:24.178 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1285430453884cf49f4f7a344c882c0b 00:14:24.178 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1285430453884cf49f4f7a344c882c0b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:24.179 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:24.179 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:24.179 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:24.179 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:24.179 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:24.179 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ca2b490b-c279-413e-b6df-7a4190a6c3e5 -a 10.0.0.2 -s 4420 -i 4 00:14:24.439 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:24.439 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:24.439 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:24.439 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:24.439 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:24.439 06:57:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:26.983 06:57:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:26.983 06:57:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:26.983 06:57:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:26.983 06:57:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:26.983 06:57:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:26.983 06:57:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:26.983 06:57:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:26.983 06:57:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:26.983 06:57:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:26.983 06:57:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:26.983 06:57:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:26.983 06:57:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:26.983 06:57:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:26.983 [ 0]:0x1 00:14:26.983 06:57:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:26.983 06:57:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:26.983 06:57:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=835072e2b77b4fc6990cedd417147796 00:14:26.983 06:57:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 835072e2b77b4fc6990cedd417147796 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:26.983 06:57:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:26.983 06:57:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:26.983 06:57:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:26.983 [ 1]:0x2 00:14:26.983 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:26.983 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:26.983 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1285430453884cf49f4f7a344c882c0b 00:14:26.983 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1285430453884cf49f4f7a344c882c0b != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:26.983 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:26.983 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:26.983 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:26.983 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:26.983 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:26.983 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:26.983 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:26.983 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:26.983 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:26.983 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:26.983 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:26.983 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:26.983 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:26.983 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:26.983 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:26.983 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:26.983 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:26.983 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:26.983 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:26.983 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:14:26.983 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:26.983 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:26.983 [ 0]:0x2 00:14:26.983 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:26.983 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:26.983 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1285430453884cf49f4f7a344c882c0b 00:14:26.983 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1285430453884cf49f4f7a344c882c0b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:26.983 06:57:26 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:26.983 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:26.983 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:26.983 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:26.983 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:26.983 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:26.983 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:26.983 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:26.983 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:26.984 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:26.984 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:26.984 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:27.244 [2024-10-16 06:57:26.512027] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:27.244 request: 00:14:27.244 { 00:14:27.244 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:27.244 "nsid": 2, 00:14:27.244 "host": "nqn.2016-06.io.spdk:host1", 00:14:27.244 "method": "nvmf_ns_remove_host", 00:14:27.244 "req_id": 1 00:14:27.244 } 00:14:27.244 Got JSON-RPC error response 00:14:27.244 response: 00:14:27.244 { 00:14:27.244 "code": -32602, 00:14:27.244 "message": "Invalid parameters" 00:14:27.244 } 00:14:27.244 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:27.244 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:27.244 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:27.244 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:27.244 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:27.244 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:27.244 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:27.244 06:57:26 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:27.244 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:27.245 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:27.245 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:27.245 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:27.245 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:27.245 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:27.245 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:27.245 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:27.245 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:27.245 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:27.245 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:27.245 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:27.245 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:27.245 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:27.245 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:27.245 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:27.245 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:27.245 [ 0]:0x2 00:14:27.245 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:27.245 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:27.245 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1285430453884cf49f4f7a344c882c0b 00:14:27.245 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1285430453884cf49f4f7a344c882c0b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:27.245 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:27.245 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:27.245 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:27.245 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3071485 00:14:27.245 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:27.245 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:27.245 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3071485 /var/tmp/host.sock 00:14:27.245 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 3071485 ']' 00:14:27.245 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:14:27.245 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:27.245 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:27.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:27.245 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:27.245 06:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:27.505 [2024-10-16 06:57:26.790242] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:14:27.505 [2024-10-16 06:57:26.790296] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3071485 ] 00:14:27.505 [2024-10-16 06:57:26.867435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.505 [2024-10-16 06:57:26.902784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:28.076 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:28.076 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:14:28.076 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:28.336 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:28.596 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid cf387aa7-aa07-42d0-bf6d-3944920aec32 00:14:28.596 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:14:28.596 06:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g CF387AA7AA0742D0BF6D3944920AEC32 -i 00:14:28.857 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid d05f9ee8-ba55-477d-953d-10ba54d6e7b2 00:14:28.857 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:14:28.857 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g D05F9EE8BA55477D953D10BA54D6E7B2 -i 00:14:28.857 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:29.118 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:29.378 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:29.378 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:29.639 nvme0n1 00:14:29.639 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:29.639 06:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:29.899 nvme1n2 00:14:29.899 06:57:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:29.899 06:57:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:29.899 06:57:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:29.899 06:57:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:29.899 06:57:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:30.160 06:57:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:30.160 06:57:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:30.160 06:57:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:30.160 06:57:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:30.420 06:57:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ cf387aa7-aa07-42d0-bf6d-3944920aec32 == \c\f\3\8\7\a\a\7\-\a\a\0\7\-\4\2\d\0\-\b\f\6\d\-\3\9\4\4\9\2\0\a\e\c\3\2 ]] 00:14:30.420 06:57:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:30.420 06:57:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:30.420 06:57:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:30.680 06:57:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
d05f9ee8-ba55-477d-953d-10ba54d6e7b2 == \d\0\5\f\9\e\e\8\-\b\a\5\5\-\4\7\7\d\-\9\5\3\d\-\1\0\b\a\5\4\d\6\e\7\b\2 ]] 00:14:30.680 06:57:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 3071485 00:14:30.680 06:57:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 3071485 ']' 00:14:30.680 06:57:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 3071485 00:14:30.680 06:57:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:14:30.680 06:57:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:30.680 06:57:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3071485 00:14:30.680 06:57:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:30.680 06:57:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:30.680 06:57:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3071485' 00:14:30.680 killing process with pid 3071485 00:14:30.680 06:57:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 3071485 00:14:30.680 06:57:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 3071485 00:14:30.941 06:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:30.941 06:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:14:30.941 06:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:14:30.941 06:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:30.941 06:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:14:30.941 06:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:30.941 06:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:14:30.941 06:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:30.941 06:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:30.941 rmmod nvme_tcp 00:14:30.941 rmmod nvme_fabrics 00:14:30.941 rmmod nvme_keyring 00:14:31.202 06:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:31.202 06:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:14:31.202 06:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:14:31.202 06:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@515 -- # '[' -n 3069147 ']' 00:14:31.202 06:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # killprocess 3069147 00:14:31.202 06:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 3069147 ']' 00:14:31.202 06:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 3069147 00:14:31.202 06:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@955 -- # uname 00:14:31.202 06:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:31.202 06:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3069147 00:14:31.202 06:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:31.202 06:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:31.202 06:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3069147' 00:14:31.202 killing process with pid 3069147 00:14:31.202 06:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 3069147 00:14:31.202 06:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 3069147 00:14:31.202 06:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:31.202 06:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:31.202 06:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:31.202 06:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:14:31.202 06:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-save 00:14:31.202 06:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:31.202 06:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-restore 00:14:31.202 06:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:31.202 06:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:31.202 06:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.202 06:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:31.202 06:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:33.746 06:57:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:33.746 00:14:33.746 real 0m25.267s 00:14:33.746 user 0m25.717s 00:14:33.746 sys 0m8.012s 00:14:33.746 06:57:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:33.746 06:57:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:33.746 ************************************ 00:14:33.746 END TEST nvmf_ns_masking 00:14:33.746 ************************************ 00:14:33.746 06:57:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:14:33.746 06:57:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:33.746 06:57:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:33.746 06:57:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:33.746 06:57:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 
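The ns_masking test that ends above drives per-host namespace visibility entirely over JSON-RPC. A condensed sketch of that flow, restricted to commands and values already visible in the trace (rpc.py stands in for the full scripts/rpc.py path; the NQNs and NGUID are this run's):
# Target side: add a namespace that starts hidden (-i), then grant and revoke host access
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g CF387AA7AA0742D0BF6D3944920AEC32 -i
rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
# Initiator side: a namespace counts as visible when list-ns reports its NSID
# and id-ns returns a non-zero NGUID
nvme list-ns /dev/nvme0 | grep 0x1
nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid
Note that the -32602 "Invalid parameters" response captured earlier in the trace is an expected failure: the NOT wrapper asserts that the nvmf_ns_remove_host RPC exits non-zero for that case.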
00:14:33.746 ************************************ 00:14:33.746 START TEST nvmf_nvme_cli 00:14:33.746 ************************************ 00:14:33.746 06:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:33.746 * Looking for test storage... 00:14:33.746 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:33.746 06:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:33.746 06:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lcov --version 00:14:33.746 06:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:33.746 06:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:33.746 06:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:33.746 06:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:33.746 06:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:33.746 06:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:14:33.746 06:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:14:33.746 06:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:14:33.746 06:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:14:33.746 06:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:14:33.746 06:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:14:33.746 06:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:14:33.746 06:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:33.746 06:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:14:33.746 06:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:14:33.746 06:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:33.746 06:57:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:33.746 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:14:33.746 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:14:33.746 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:33.746 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:14:33.746 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:14:33.746 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:14:33.746 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:14:33.746 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:33.746 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:14:33.746 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:14:33.746 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:33.746 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:33.746 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:14:33.747 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:33.747 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:33.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:33.747 --rc genhtml_branch_coverage=1 00:14:33.747 --rc genhtml_function_coverage=1 00:14:33.747 --rc genhtml_legend=1 00:14:33.747 --rc geninfo_all_blocks=1 00:14:33.747 --rc geninfo_unexecuted_blocks=1 00:14:33.747 00:14:33.747 ' 00:14:33.747 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:33.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:33.747 --rc genhtml_branch_coverage=1 00:14:33.747 --rc genhtml_function_coverage=1 00:14:33.747 --rc genhtml_legend=1 00:14:33.747 --rc geninfo_all_blocks=1 00:14:33.747 --rc geninfo_unexecuted_blocks=1 00:14:33.747 00:14:33.747 ' 00:14:33.747 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:33.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:33.747 --rc genhtml_branch_coverage=1 00:14:33.747 --rc genhtml_function_coverage=1 00:14:33.747 --rc genhtml_legend=1 00:14:33.747 --rc geninfo_all_blocks=1 00:14:33.747 --rc geninfo_unexecuted_blocks=1 00:14:33.747 00:14:33.747 ' 00:14:33.747 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:33.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:33.747 --rc genhtml_branch_coverage=1 00:14:33.747 --rc genhtml_function_coverage=1 00:14:33.747 --rc genhtml_legend=1 00:14:33.747 --rc geninfo_all_blocks=1 00:14:33.747 --rc geninfo_unexecuted_blocks=1 00:14:33.747 00:14:33.747 ' 00:14:33.747 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:33.747 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
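The block above traces scripts/common.sh's version comparison (lt 1.15 2 deciding which lcov flags apply). A standalone reconstruction of that logic, written from the traced steps rather than copied from the script, looks roughly like this:
# Compare dotted versions field by field; missing fields count as 0
cmp_versions() {
    local ver1 ver2 v d1 d2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$3"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        d1=${ver1[v]:-0} d2=${ver2[v]:-0}
        ((d1 > d2)) && { [[ $2 == '>' ]]; return; }
        ((d1 < d2)) && { [[ $2 == '<' ]]; return; }
    done
    [[ $2 == *'='* ]]   # all fields equal: only >=, <=, == succeed
}
lt() { cmp_versions "$1" '<' "$2"; }
lt 1.15 2 && echo 'old lcov'   # prints: old lcov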
00:14:33.747 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:33.747 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:33.747 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:33.747 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:33.747 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:33.747 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:33.747 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:33.747 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:33.747 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:33.747 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:33.747 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:33.747 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:33.747 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:33.747 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:33.747 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:33.747 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:33.747 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:33.747 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:14:33.747 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:33.747 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:33.747 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:33.747 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.747 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.747 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.747 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:33.747 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.747 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:14:33.747 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:33.747 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:33.747 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:33.747 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:33.747 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:33.747 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:33.747 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:33.747 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:33.747 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:33.747 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:33.747 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:33.747 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:33.747 06:57:33 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:33.747 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:33.747 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:33.747 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:33.747 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:33.747 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:33.747 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:33.747 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:33.747 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:33.747 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:33.747 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:14:33.747 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:14:33.747 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:14:33.747 06:57:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:41.946 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:41.946 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:41.946 
06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:41.946 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:41.946 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # is_hw=yes 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:41.946 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:41.947 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:41.947 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:41.947 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:41.947 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:41.947 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:41.947 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:41.947 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:41.947 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:41.947 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:41.947 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:41.947 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.634 ms 00:14:41.947 00:14:41.947 --- 10.0.0.2 ping statistics --- 00:14:41.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.947 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:14:41.947 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:41.947 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:41.947 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:14:41.947 00:14:41.947 --- 10.0.0.1 ping statistics --- 00:14:41.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.947 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:14:41.947 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:41.947 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # return 0 00:14:41.947 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:41.947 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:41.947 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:41.947 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:41.947 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:41.947 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:41.947 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:41.947 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:41.947 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:41.947 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:41.947 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:41.947 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # nvmfpid=3076517 00:14:41.947 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # waitforlisten 3076517 00:14:41.947 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:41.947 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 3076517 ']' 00:14:41.947 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:41.947 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:41.947 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:41.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:41.947 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:41.947 06:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:41.947 [2024-10-16 06:57:40.627433] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
00:14:41.947 [2024-10-16 06:57:40.627496] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:41.947 [2024-10-16 06:57:40.717182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:41.947 [2024-10-16 06:57:40.771457] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:41.947 [2024-10-16 06:57:40.771510] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:41.947 [2024-10-16 06:57:40.771519] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:41.947 [2024-10-16 06:57:40.771526] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:41.947 [2024-10-16 06:57:40.771532] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:41.947 [2024-10-16 06:57:40.773816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:41.947 [2024-10-16 06:57:40.773985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:41.947 [2024-10-16 06:57:40.774245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:41.947 [2024-10-16 06:57:40.774248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.240 06:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:42.240 06:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:14:42.240 06:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:42.240 06:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:42.240 06:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:42.240 06:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:42.240 06:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:42.240 06:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.240 06:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:42.240 [2024-10-16 06:57:41.509724] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:42.240 06:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.240 06:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:42.240 06:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.240 06:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:42.240 Malloc0 00:14:42.240 06:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.240 06:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:42.240 06:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:42.240 06:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:42.240 Malloc1 00:14:42.240 06:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.240 06:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:42.240 06:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.240 06:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:42.240 06:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.240 06:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:42.240 06:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.240 06:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:42.240 06:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.240 06:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:42.240 06:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.240 06:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:42.240 06:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.240 06:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:42.240 06:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.240 06:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:42.240 [2024-10-16 06:57:41.624266] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:42.240 06:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.240 06:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:42.240 06:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.240 06:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:42.240 06:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.240 06:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:14:42.501 00:14:42.501 Discovery Log Number of Records 2, Generation counter 2 00:14:42.501 =====Discovery Log Entry 0====== 00:14:42.501 trtype: tcp 00:14:42.501 adrfam: ipv4 00:14:42.501 subtype: current discovery subsystem 00:14:42.501 treq: not required 00:14:42.501 portid: 0 00:14:42.501 trsvcid: 4420 00:14:42.501 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:14:42.501 traddr: 10.0.0.2 00:14:42.501 eflags: explicit discovery connections, duplicate discovery information 00:14:42.501 sectype: none 00:14:42.501 =====Discovery Log Entry 1====== 00:14:42.501 trtype: tcp 00:14:42.501 adrfam: ipv4 00:14:42.501 subtype: nvme subsystem 00:14:42.501 treq: not required 00:14:42.501 portid: 0 00:14:42.501 trsvcid: 4420 00:14:42.501 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:42.501 traddr: 10.0.0.2 00:14:42.501 eflags: none 00:14:42.501 sectype: none 00:14:42.501 06:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:42.501 06:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:42.501 06:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:14:42.501 06:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:42.501 06:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:14:42.501 06:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:14:42.501 06:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:42.501 06:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:14:42.501 06:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:42.501 06:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:42.501 06:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:43.885 06:57:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:43.885 06:57:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:14:43.885 06:57:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:43.885 06:57:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:43.885 06:57:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:43.885 06:57:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:14:46.427 06:57:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:46.428 06:57:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:46.428 06:57:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:46.428 06:57:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:46.428 06:57:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:46.428 06:57:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:14:46.428 06:57:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:46.428 06:57:45 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:14:46.428 06:57:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:46.428 06:57:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:14:46.428 06:57:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:14:46.428 06:57:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:46.428 06:57:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:14:46.428 06:57:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:46.428 06:57:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:46.428 06:57:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:14:46.428 06:57:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:46.428 06:57:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:46.428 06:57:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:14:46.428 06:57:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:46.428 06:57:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:14:46.428 /dev/nvme0n2 ]] 00:14:46.428 06:57:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:46.428 06:57:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:46.428 06:57:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:14:46.428 06:57:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:46.428 06:57:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:14:46.428 06:57:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:14:46.428 06:57:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:46.428 06:57:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:14:46.428 06:57:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:46.428 06:57:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:46.428 06:57:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:14:46.428 06:57:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:46.428 06:57:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:46.428 06:57:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:14:46.428 06:57:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:46.428 06:57:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:46.428 06:57:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:46.689 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.689 06:57:45 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:46.689 06:57:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:14:46.689 06:57:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:46.689 06:57:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:46.689 06:57:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:46.689 06:57:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:46.689 06:57:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:14:46.689 06:57:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:46.689 06:57:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:46.689 06:57:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.689 06:57:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:46.689 06:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.689 06:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:46.689 06:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:46.689 06:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:46.689 06:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:14:46.689 06:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:46.689 06:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:14:46.689 06:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:46.689 06:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:46.689 rmmod nvme_tcp 00:14:46.689 rmmod nvme_fabrics 00:14:46.689 rmmod nvme_keyring 00:14:46.689 06:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:46.689 06:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:14:46.689 06:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:14:46.689 06:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@515 -- # '[' -n 3076517 ']' 00:14:46.689 06:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # killprocess 3076517 00:14:46.689 06:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 3076517 ']' 00:14:46.689 06:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 3076517 00:14:46.689 06:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:14:46.689 06:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:46.689 06:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
3076517 00:14:46.689 06:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:46.689 06:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:46.689 06:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3076517' 00:14:46.689 killing process with pid 3076517 00:14:46.689 06:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 3076517 00:14:46.689 06:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 3076517 00:14:46.949 06:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:46.949 06:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:46.949 06:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:46.950 06:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:14:46.950 06:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-save 00:14:46.950 06:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:46.950 06:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-restore 00:14:46.950 06:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:46.950 06:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:46.950 06:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:46.950 06:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:46.950 06:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:48.864 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:48.864 00:14:48.864 real 0m15.532s 00:14:48.864 user 0m24.139s 00:14:48.864 sys 0m6.402s 00:14:48.864 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:48.864 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:48.864 ************************************ 00:14:48.864 END TEST nvmf_nvme_cli 00:14:48.864 ************************************ 00:14:49.131 06:57:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:49.131 06:57:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:49.131 06:57:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:49.131 06:57:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:49.131 06:57:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:49.131 ************************************ 00:14:49.131 START TEST nvmf_vfio_user 00:14:49.131 ************************************ 00:14:49.131 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:14:49.131 * Looking for test storage... 00:14:49.131 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:49.131 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:49.131 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lcov --version 00:14:49.131 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:49.131 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:49.131 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:49.131 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:49.131 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:49.131 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:14:49.131 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:14:49.132 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:14:49.132 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:14:49.132 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:14:49.132 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:14:49.132 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:14:49.132 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:49.132 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:14:49.132 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:14:49.132 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:49.132 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:49.132 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:14:49.132 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:14:49.132 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:49.132 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:14:49.132 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:14:49.132 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:14:49.132 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:14:49.132 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:49.132 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:14:49.132 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:14:49.132 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:49.132 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:49.132 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:14:49.132 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:49.132 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:49.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:49.132 --rc genhtml_branch_coverage=1 00:14:49.132 --rc genhtml_function_coverage=1 00:14:49.132 --rc genhtml_legend=1 00:14:49.132 --rc geninfo_all_blocks=1 00:14:49.132 --rc geninfo_unexecuted_blocks=1 00:14:49.132 00:14:49.132 ' 00:14:49.393 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:49.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:49.393 --rc genhtml_branch_coverage=1 00:14:49.393 --rc genhtml_function_coverage=1 00:14:49.393 --rc genhtml_legend=1 00:14:49.393 --rc geninfo_all_blocks=1 00:14:49.393 --rc geninfo_unexecuted_blocks=1 00:14:49.393 00:14:49.393 ' 00:14:49.393 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:49.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:49.393 --rc genhtml_branch_coverage=1 00:14:49.393 --rc genhtml_function_coverage=1 00:14:49.393 --rc genhtml_legend=1 00:14:49.393 --rc geninfo_all_blocks=1 00:14:49.393 --rc geninfo_unexecuted_blocks=1 00:14:49.393 00:14:49.393 ' 00:14:49.393 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:49.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:49.393 --rc genhtml_branch_coverage=1 00:14:49.393 --rc genhtml_function_coverage=1 00:14:49.393 --rc genhtml_legend=1 00:14:49.393 --rc geninfo_all_blocks=1 00:14:49.393 --rc geninfo_unexecuted_blocks=1 00:14:49.393 00:14:49.393 ' 00:14:49.393 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:49.393 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:14:49.393 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:49.393 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:49.393 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:49.393 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:49.393 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:49.393 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:49.393 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:49.393 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:49.393 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:49.393 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:49.393 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:49.393 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:49.393 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:49.393 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:49.393 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:49.393 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:49.393 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:49.393 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:14:49.393 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:49.393 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:49.393 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:49.393 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.393 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.393 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.393 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:49.394 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.394 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:14:49.394 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:49.394 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:49.394 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:49.394 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:49.394 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:49.394 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:49.394 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:49.394 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:49.394 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:49.394 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:49.394 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:49.394 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
00:14:49.394 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:49.394 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:49.394 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:49.394 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:49.394 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:49.394 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:49.394 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:49.394 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:49.394 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3078170 00:14:49.394 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3078170' 00:14:49.394 Process pid: 3078170 00:14:49.394 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:49.394 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3078170 00:14:49.394 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 3078170 ']' 00:14:49.394 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:49.394 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.394 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:49.394 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:49.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:49.394 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:49.394 06:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:49.394 [2024-10-16 06:57:48.739149] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:14:49.394 [2024-10-16 06:57:48.739222] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:49.394 [2024-10-16 06:57:48.822174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:49.394 [2024-10-16 06:57:48.863363] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:49.394 [2024-10-16 06:57:48.863404] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:49.394 [2024-10-16 06:57:48.863410] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:49.394 [2024-10-16 06:57:48.863415] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:49.394 [2024-10-16 06:57:48.863420] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:49.394 [2024-10-16 06:57:48.864956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:49.394 [2024-10-16 06:57:48.865208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:49.394 [2024-10-16 06:57:48.865362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.394 [2024-10-16 06:57:48.865362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:50.335 06:57:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:50.335 06:57:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:14:50.336 06:57:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:51.283 06:57:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:51.283 06:57:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:51.283 06:57:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:51.283 06:57:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:51.283 06:57:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:51.283 06:57:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:51.543 Malloc1 00:14:51.544 06:57:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:51.804 06:57:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:52.064 06:57:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:52.064 06:57:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:52.064 06:57:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:52.064 06:57:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:52.326 Malloc2 00:14:52.326 06:57:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
00:14:52.587 06:57:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:52.587 06:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:52.850 06:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:52.850 06:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:52.850 06:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:52.851 06:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:52.851 06:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:52.851 06:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:52.851 [2024-10-16 06:57:52.271278] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:14:52.851 [2024-10-16 06:57:52.271348] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3078903 ] 00:14:52.851 [2024-10-16 06:57:52.298546] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:52.851 [2024-10-16 06:57:52.310920] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:52.851 [2024-10-16 06:57:52.310937] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f2382c6d000 00:14:52.851 [2024-10-16 06:57:52.311920] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:52.851 [2024-10-16 06:57:52.312916] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:52.851 [2024-10-16 06:57:52.313922] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:52.851 [2024-10-16 06:57:52.314931] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:52.851 [2024-10-16 06:57:52.315939] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:52.851 [2024-10-16 06:57:52.316940] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:52.851 [2024-10-16 06:57:52.317949] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:14:52.851 [2024-10-16 06:57:52.318950] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:52.851 [2024-10-16 06:57:52.319964] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:52.851 [2024-10-16 06:57:52.319975] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f2382c62000 00:14:52.851 [2024-10-16 06:57:52.320891] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:52.851 [2024-10-16 06:57:52.334340] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:52.851 [2024-10-16 06:57:52.334365] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:14:52.851 [2024-10-16 06:57:52.337076] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:52.851 [2024-10-16 06:57:52.337107] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:52.851 [2024-10-16 06:57:52.337170] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:14:52.851 [2024-10-16 06:57:52.337185] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:14:52.851 [2024-10-16 06:57:52.337189] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:14:52.851 [2024-10-16 06:57:52.338078] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:52.851 [2024-10-16 06:57:52.338085] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:14:52.851 [2024-10-16 06:57:52.338090] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:14:52.851 [2024-10-16 06:57:52.339078] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:52.851 [2024-10-16 06:57:52.339085] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:14:52.851 [2024-10-16 06:57:52.339091] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:14:52.851 [2024-10-16 06:57:52.340086] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:52.851 [2024-10-16 06:57:52.340094] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:52.851 [2024-10-16 06:57:52.341096] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:52.851 [2024-10-16 
06:57:52.341102] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:14:52.851 [2024-10-16 06:57:52.341106] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:14:52.851 [2024-10-16 06:57:52.341110] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:52.851 [2024-10-16 06:57:52.341214] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:14:52.851 [2024-10-16 06:57:52.341218] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:52.851 [2024-10-16 06:57:52.341222] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:52.851 [2024-10-16 06:57:52.342102] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:52.851 [2024-10-16 06:57:52.343122] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:52.851 [2024-10-16 06:57:52.344114] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:52.851 [2024-10-16 06:57:52.345113] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:52.851 [2024-10-16 06:57:52.345179] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:52.851 [2024-10-16 06:57:52.346123] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:52.851 [2024-10-16 06:57:52.346129] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:52.851 [2024-10-16 06:57:52.346132] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:14:52.851 [2024-10-16 06:57:52.346148] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:14:52.851 [2024-10-16 06:57:52.346153] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:14:52.851 [2024-10-16 06:57:52.346166] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:52.851 [2024-10-16 06:57:52.346170] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:52.851 [2024-10-16 06:57:52.346173] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:52.851 [2024-10-16 06:57:52.346184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:52.851 [2024-10-16 06:57:52.346224] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:52.851 [2024-10-16 06:57:52.346232] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:14:52.851 [2024-10-16 06:57:52.346236] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:14:52.851 [2024-10-16 06:57:52.346239] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:14:52.851 [2024-10-16 06:57:52.346242] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:52.851 [2024-10-16 06:57:52.346245] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:14:52.851 [2024-10-16 06:57:52.346249] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:14:52.851 [2024-10-16 06:57:52.346252] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:14:52.851 [2024-10-16 06:57:52.346258] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:14:52.851 [2024-10-16 06:57:52.346265] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:52.851 [2024-10-16 06:57:52.346281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:52.851 [2024-10-16 06:57:52.346290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:52.851 [2024-10-16 06:57:52.346297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:52.851 [2024-10-16 06:57:52.346302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:52.851 [2024-10-16 06:57:52.346310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:52.851 [2024-10-16 06:57:52.346313] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:52.851 [2024-10-16 06:57:52.346320] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:52.851 [2024-10-16 06:57:52.346327] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:52.851 [2024-10-16 06:57:52.346335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:52.851 [2024-10-16 06:57:52.346339] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:14:52.851 [2024-10-16 06:57:52.346343] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:52.851 [2024-10-16 06:57:52.346348] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:14:52.851 [2024-10-16 06:57:52.346353] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:52.851 [2024-10-16 06:57:52.346360] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:52.851 [2024-10-16 06:57:52.346373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:52.851 [2024-10-16 06:57:52.346415] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:14:52.852 [2024-10-16 06:57:52.346421] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:52.852 [2024-10-16 06:57:52.346427] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:52.852 [2024-10-16 06:57:52.346430] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:52.852 [2024-10-16 06:57:52.346432] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:52.852 [2024-10-16 06:57:52.346437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:52.852 [2024-10-16 06:57:52.346452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:52.852 [2024-10-16 06:57:52.346462] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:14:52.852 [2024-10-16 06:57:52.346468] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:14:52.852 [2024-10-16 06:57:52.346474] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:14:52.852 [2024-10-16 06:57:52.346479] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:52.852 [2024-10-16 06:57:52.346482] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:52.852 [2024-10-16 06:57:52.346484] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:52.852 [2024-10-16 06:57:52.346489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:52.852 [2024-10-16 06:57:52.346507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:52.852 [2024-10-16 06:57:52.346519] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:52.852 [2024-10-16 06:57:52.346525] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:52.852 [2024-10-16 06:57:52.346530] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:52.852 [2024-10-16 06:57:52.346533] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:52.852 [2024-10-16 06:57:52.346535] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:52.852 [2024-10-16 06:57:52.346539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:52.852 [2024-10-16 06:57:52.346547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:52.852 [2024-10-16 06:57:52.346554] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:52.852 [2024-10-16 06:57:52.346558] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:14:52.852 [2024-10-16 06:57:52.346564] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:14:52.852 [2024-10-16 06:57:52.346568] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:14:52.852 [2024-10-16 06:57:52.346572] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:52.852 [2024-10-16 06:57:52.346576] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:14:52.852 [2024-10-16 06:57:52.346580] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:14:52.852 [2024-10-16 06:57:52.346583] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:14:52.852 [2024-10-16 06:57:52.346587] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:14:52.852 [2024-10-16 06:57:52.346602] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:52.852 [2024-10-16 06:57:52.346613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:52.852 [2024-10-16 06:57:52.346621] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:52.852 [2024-10-16 06:57:52.346632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:52.852 [2024-10-16 06:57:52.346640] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:52.852 [2024-10-16 06:57:52.346648] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:52.852 [2024-10-16 06:57:52.346656] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:52.852 [2024-10-16 06:57:52.346664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:52.852 [2024-10-16 06:57:52.346674] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:52.852 [2024-10-16 06:57:52.346679] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:52.852 [2024-10-16 06:57:52.346681] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:52.852 [2024-10-16 06:57:52.346684] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:52.852 [2024-10-16 06:57:52.346686] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:52.852 [2024-10-16 06:57:52.346690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:52.852 [2024-10-16 06:57:52.346696] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:52.852 [2024-10-16 06:57:52.346699] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:52.852 [2024-10-16 06:57:52.346701] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:52.852 [2024-10-16 06:57:52.346705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:52.852 [2024-10-16 06:57:52.346711] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:52.852 [2024-10-16 06:57:52.346714] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:52.852 [2024-10-16 06:57:52.346716] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:52.852 [2024-10-16 06:57:52.346720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:52.852 [2024-10-16 06:57:52.346726] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:52.852 [2024-10-16 06:57:52.346729] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:52.852 [2024-10-16 06:57:52.346731] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:52.852 [2024-10-16 06:57:52.346736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:52.852 [2024-10-16 06:57:52.346741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:52.852 [2024-10-16 06:57:52.346749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:52.852 [2024-10-16 06:57:52.346756] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:52.852 [2024-10-16 06:57:52.346762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:52.852 ===================================================== 00:14:52.852 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:52.852 ===================================================== 00:14:52.852 Controller Capabilities/Features 00:14:52.852 ================================ 00:14:52.852 Vendor ID: 4e58 00:14:52.852 Subsystem Vendor ID: 4e58 00:14:52.852 Serial Number: SPDK1 00:14:52.852 Model Number: SPDK bdev Controller 00:14:52.852 Firmware Version: 25.01 00:14:52.852 Recommended Arb Burst: 6 00:14:52.852 IEEE OUI Identifier: 8d 6b 50 00:14:52.852 Multi-path I/O 00:14:52.852 May have multiple subsystem ports: Yes 00:14:52.852 May have multiple controllers: Yes 00:14:52.852 Associated with SR-IOV VF: No 00:14:52.852 Max Data Transfer Size: 131072 00:14:52.852 Max Number of Namespaces: 32 00:14:52.852 Max Number of I/O Queues: 127 00:14:52.852 NVMe Specification Version (VS): 1.3 00:14:52.852 NVMe Specification Version (Identify): 1.3 00:14:52.852 Maximum Queue Entries: 256 00:14:52.852 Contiguous Queues Required: Yes 00:14:52.852 Arbitration Mechanisms Supported 00:14:52.852 Weighted Round Robin: Not Supported 00:14:52.852 Vendor Specific: Not Supported 00:14:52.852 Reset Timeout: 15000 ms 00:14:52.852 Doorbell Stride: 4 bytes 00:14:52.852 NVM Subsystem Reset: Not Supported 00:14:52.852 Command Sets Supported 00:14:52.852 NVM Command Set: Supported 00:14:52.852 Boot Partition: Not Supported 00:14:52.852 Memory Page Size Minimum: 4096 bytes 00:14:52.852 Memory Page Size Maximum: 4096 bytes 00:14:52.852 Persistent Memory Region: Not Supported 00:14:52.852 Optional Asynchronous Events Supported 00:14:52.852 Namespace Attribute Notices: Supported 00:14:52.852 Firmware Activation Notices: Not Supported 00:14:52.852 ANA Change Notices: Not Supported 00:14:52.852 PLE Aggregate Log Change Notices: Not Supported 00:14:52.852 LBA Status Info Alert Notices: Not Supported 00:14:52.852 EGE Aggregate Log Change Notices: Not Supported 00:14:52.852 Normal NVM Subsystem Shutdown event: Not Supported 00:14:52.852 Zone Descriptor Change Notices: Not Supported 00:14:52.852 Discovery Log Change Notices: Not Supported 00:14:52.852 Controller Attributes 00:14:52.852 128-bit Host Identifier: Supported 00:14:52.852 Non-Operational Permissive Mode: Not Supported 00:14:52.852 NVM Sets: Not Supported 00:14:52.852 Read Recovery Levels: Not Supported 00:14:52.852 Endurance Groups: Not Supported 00:14:52.852 Predictable Latency Mode: Not Supported 00:14:52.852 Traffic Based Keep ALive: Not Supported 00:14:52.852 Namespace Granularity: Not Supported 00:14:52.852 SQ Associations: Not Supported 00:14:52.852 UUID List: Not Supported 00:14:52.852 Multi-Domain Subsystem: Not Supported 00:14:52.852 Fixed Capacity Management: Not Supported 00:14:52.852 Variable Capacity Management: Not Supported 00:14:52.852 Delete Endurance Group: Not Supported 00:14:52.852 Delete NVM Set: Not Supported 00:14:52.852 Extended LBA Formats Supported: Not Supported 00:14:52.852 Flexible Data Placement Supported: Not Supported 00:14:52.852 00:14:52.852 Controller Memory Buffer Support 00:14:52.852 ================================ 00:14:52.853 Supported: No 00:14:52.853 00:14:52.853 Persistent Memory Region Support 00:14:52.853 
================================ 00:14:52.853 Supported: No 00:14:52.853 00:14:52.853 Admin Command Set Attributes 00:14:52.853 ============================ 00:14:52.853 Security Send/Receive: Not Supported 00:14:52.853 Format NVM: Not Supported 00:14:52.853 Firmware Activate/Download: Not Supported 00:14:52.853 Namespace Management: Not Supported 00:14:52.853 Device Self-Test: Not Supported 00:14:52.853 Directives: Not Supported 00:14:52.853 NVMe-MI: Not Supported 00:14:52.853 Virtualization Management: Not Supported 00:14:52.853 Doorbell Buffer Config: Not Supported 00:14:52.853 Get LBA Status Capability: Not Supported 00:14:52.853 Command & Feature Lockdown Capability: Not Supported 00:14:52.853 Abort Command Limit: 4 00:14:52.853 Async Event Request Limit: 4 00:14:52.853 Number of Firmware Slots: N/A 00:14:52.853 Firmware Slot 1 Read-Only: N/A 00:14:52.853 Firmware Activation Without Reset: N/A 00:14:52.853 Multiple Update Detection Support: N/A 00:14:52.853 Firmware Update Granularity: No Information Provided 00:14:52.853 Per-Namespace SMART Log: No 00:14:52.853 Asymmetric Namespace Access Log Page: Not Supported 00:14:52.853 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:52.853 Command Effects Log Page: Supported 00:14:52.853 Get Log Page Extended Data: Supported 00:14:52.853 Telemetry Log Pages: Not Supported 00:14:52.853 Persistent Event Log Pages: Not Supported 00:14:52.853 Supported Log Pages Log Page: May Support 00:14:52.853 Commands Supported & Effects Log Page: Not Supported 00:14:52.853 Feature Identifiers & Effects Log Page:May Support 00:14:52.853 NVMe-MI Commands & Effects Log Page: May Support 00:14:52.853 Data Area 4 for Telemetry Log: Not Supported 00:14:52.853 Error Log Page Entries Supported: 128 00:14:52.853 Keep Alive: Supported 00:14:52.853 Keep Alive Granularity: 10000 ms 00:14:52.853 00:14:52.853 NVM Command Set Attributes 00:14:52.853 ========================== 00:14:52.853 Submission Queue Entry Size 00:14:52.853 Max: 64 00:14:52.853 Min: 64 00:14:52.853 Completion Queue Entry Size 00:14:52.853 Max: 16 00:14:52.853 Min: 16 00:14:52.853 Number of Namespaces: 32 00:14:52.853 Compare Command: Supported 00:14:52.853 Write Uncorrectable Command: Not Supported 00:14:52.853 Dataset Management Command: Supported 00:14:52.853 Write Zeroes Command: Supported 00:14:52.853 Set Features Save Field: Not Supported 00:14:52.853 Reservations: Not Supported 00:14:52.853 Timestamp: Not Supported 00:14:52.853 Copy: Supported 00:14:52.853 Volatile Write Cache: Present 00:14:52.853 Atomic Write Unit (Normal): 1 00:14:52.853 Atomic Write Unit (PFail): 1 00:14:52.853 Atomic Compare & Write Unit: 1 00:14:52.853 Fused Compare & Write: Supported 00:14:52.853 Scatter-Gather List 00:14:52.853 SGL Command Set: Supported (Dword aligned) 00:14:52.853 SGL Keyed: Not Supported 00:14:52.853 SGL Bit Bucket Descriptor: Not Supported 00:14:52.853 SGL Metadata Pointer: Not Supported 00:14:52.853 Oversized SGL: Not Supported 00:14:52.853 SGL Metadata Address: Not Supported 00:14:52.853 SGL Offset: Not Supported 00:14:52.853 Transport SGL Data Block: Not Supported 00:14:52.853 Replay Protected Memory Block: Not Supported 00:14:52.853 00:14:52.853 Firmware Slot Information 00:14:52.853 ========================= 00:14:52.853 Active slot: 1 00:14:52.853 Slot 1 Firmware Revision: 25.01 00:14:52.853 00:14:52.853 00:14:52.853 Commands Supported and Effects 00:14:52.853 ============================== 00:14:52.853 Admin Commands 00:14:52.853 -------------- 00:14:52.853 Get Log Page (02h): Supported 
00:14:52.853 Identify (06h): Supported 00:14:52.853 Abort (08h): Supported 00:14:52.853 Set Features (09h): Supported 00:14:52.853 Get Features (0Ah): Supported 00:14:52.853 Asynchronous Event Request (0Ch): Supported 00:14:52.853 Keep Alive (18h): Supported 00:14:52.853 I/O Commands 00:14:52.853 ------------ 00:14:52.853 Flush (00h): Supported LBA-Change 00:14:52.853 Write (01h): Supported LBA-Change 00:14:52.853 Read (02h): Supported 00:14:52.853 Compare (05h): Supported 00:14:52.853 Write Zeroes (08h): Supported LBA-Change 00:14:52.853 Dataset Management (09h): Supported LBA-Change 00:14:52.853 Copy (19h): Supported LBA-Change 00:14:52.853 00:14:52.853 Error Log 00:14:52.853 ========= 00:14:52.853 00:14:52.853 Arbitration 00:14:52.853 =========== 00:14:52.853 Arbitration Burst: 1 00:14:52.853 00:14:52.853 Power Management 00:14:52.853 ================ 00:14:52.853 Number of Power States: 1 00:14:52.853 Current Power State: Power State #0 00:14:52.853 Power State #0: 00:14:52.853 Max Power: 0.00 W 00:14:52.853 Non-Operational State: Operational 00:14:52.853 Entry Latency: Not Reported 00:14:52.853 Exit Latency: Not Reported 00:14:52.853 Relative Read Throughput: 0 00:14:52.853 Relative Read Latency: 0 00:14:52.853 Relative Write Throughput: 0 00:14:52.853 Relative Write Latency: 0 00:14:52.853 Idle Power: Not Reported 00:14:52.853 Active Power: Not Reported 00:14:52.853 Non-Operational Permissive Mode: Not Supported 00:14:52.853 00:14:52.853 Health Information 00:14:52.853 ================== 00:14:52.853 Critical Warnings: 00:14:52.853 Available Spare Space: OK 00:14:52.853 Temperature: OK 00:14:52.853 Device Reliability: OK 00:14:52.853 Read Only: No 00:14:52.853 Volatile Memory Backup: OK 00:14:52.853 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:52.853 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:52.853 Available Spare: 0% 00:14:52.853 Available Sp[2024-10-16 06:57:52.346834] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:52.853 [2024-10-16 06:57:52.346850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:52.853 [2024-10-16 06:57:52.346873] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:14:52.853 [2024-10-16 06:57:52.346880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:52.853 [2024-10-16 06:57:52.346885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:52.853 [2024-10-16 06:57:52.346890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:52.853 [2024-10-16 06:57:52.346894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:52.853 [2024-10-16 06:57:52.348850] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:52.853 [2024-10-16 06:57:52.348861] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:52.853 [2024-10-16 06:57:52.349138] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling 
controller
00:14:52.853 [2024-10-16 06:57:52.349176] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us
00:14:52.853 [2024-10-16 06:57:52.349181] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms
00:14:53.115 [2024-10-16 06:57:52.350150] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9
00:14:53.115 [2024-10-16 06:57:52.350159] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds
00:14:53.115 [2024-10-16 06:57:52.350213] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl
00:14:53.115 [2024-10-16 06:57:52.352180] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:14:53.115 are Threshold: 0%
00:14:53.115 Life Percentage Used: 0%
00:14:53.115 Data Units Read: 0
00:14:53.115 Data Units Written: 0
00:14:53.115 Host Read Commands: 0
00:14:53.115 Host Write Commands: 0
00:14:53.115 Controller Busy Time: 0 minutes
00:14:53.115 Power Cycles: 0
00:14:53.115 Power On Hours: 0 hours
00:14:53.115 Unsafe Shutdowns: 0
00:14:53.115 Unrecoverable Media Errors: 0
00:14:53.115 Lifetime Error Log Entries: 0
00:14:53.115 Warning Temperature Time: 0 minutes
00:14:53.115 Critical Temperature Time: 0 minutes
00:14:53.115
00:14:53.115 Number of Queues
00:14:53.115 ================
00:14:53.115 Number of I/O Submission Queues: 127
00:14:53.115 Number of I/O Completion Queues: 127
00:14:53.115
00:14:53.115 Active Namespaces
00:14:53.115 =================
00:14:53.115 Namespace ID:1
00:14:53.115 Error Recovery Timeout: Unlimited
00:14:53.115 Command Set Identifier: NVM (00h)
00:14:53.115 Deallocate: Supported
00:14:53.115 Deallocated/Unwritten Error: Not Supported
00:14:53.115 Deallocated Read Value: Unknown
00:14:53.115 Deallocate in Write Zeroes: Not Supported
00:14:53.115 Deallocated Guard Field: 0xFFFF
00:14:53.115 Flush: Supported
00:14:53.115 Reservation: Supported
00:14:53.115 Namespace Sharing Capabilities: Multiple Controllers
00:14:53.115 Size (in LBAs): 131072 (0GiB)
00:14:53.115 Capacity (in LBAs): 131072 (0GiB)
00:14:53.115 Utilization (in LBAs): 131072 (0GiB)
00:14:53.115 NGUID: F69CA98C3F2B43BCB6843273E4746423
00:14:53.115 UUID: f69ca98c-3f2b-43bc-b684-3273e4746423
00:14:53.115 Thin Provisioning: Not Supported
00:14:53.115 Per-NS Atomic Units: Yes
00:14:53.115 Atomic Boundary Size (Normal): 0
00:14:53.115 Atomic Boundary Size (PFail): 0
00:14:53.115 Atomic Boundary Offset: 0
00:14:53.115 Maximum Single Source Range Length: 65535
00:14:53.115 Maximum Copy Length: 65535
00:14:53.115 Maximum Source Range Count: 1
00:14:53.115 NGUID/EUI64 Never Reused: No
00:14:53.115 Namespace Write Protected: No
00:14:53.115 Number of LBA Formats: 1
00:14:53.115 Current LBA Format: LBA Format #00
00:14:53.115 LBA Format #00: Data Size: 512 Metadata Size: 0
00:14:53.115
00:14:53.115 06:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
00:14:53.115 [2024-10-16 06:57:52.528479] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:14:58.402 Initializing NVMe Controllers
00:14:58.402 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:14:58.402 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1
00:14:58.402 Initialization complete. Launching workers.
00:14:58.402 ========================================================
00:14:58.402 Latency(us)
00:14:58.402 Device Information : IOPS MiB/s Average min max
00:14:58.402 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39963.29 156.11 3202.81 852.39 7771.90
00:14:58.402 ========================================================
00:14:58.402 Total : 39963.29 156.11 3202.81 852.39 7771.90
00:14:58.402
00:14:58.402 [2024-10-16 06:57:57.546644] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:14:58.402 06:57:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2
00:14:58.402 [2024-10-16 06:57:57.729489] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:15:03.686 Initializing NVMe Controllers
00:15:03.686 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:15:03.686 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1
00:15:03.686 Initialization complete. Launching workers.
00:15:03.686 ========================================================
00:15:03.686 Latency(us)
00:15:03.686 Device Information : IOPS MiB/s Average min max
00:15:03.686 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16055.95 62.72 7977.66 6984.57 8028.29
00:15:03.686 ========================================================
00:15:03.686 Total : 16055.95 62.72 7977.66 6984.57 8028.29
00:15:03.686
00:15:03.686 [2024-10-16 06:58:02.769826] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:15:03.686 06:58:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
00:15:03.686 [2024-10-16 06:58:02.963679] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:15:08.963 [2024-10-16 06:58:08.047093] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:15:08.963 Initializing NVMe Controllers
00:15:08.963 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:15:08.963 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:15:08.963 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1
00:15:08.963 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2
00:15:08.963 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3
00:15:08.963 Initialization complete. Launching workers.
00:15:08.963 Starting thread on core 2
00:15:08.963 Starting thread on core 3
00:15:08.963 Starting thread on core 1
00:15:08.963 06:58:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g
00:15:08.963 [2024-10-16 06:58:08.278178] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:15:12.260 [2024-10-16 06:58:11.339981] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:15:12.260 Initializing NVMe Controllers
00:15:12.260 Attaching to /var/run/vfio-user/domain/vfio-user1/1
00:15:12.260 Attached to /var/run/vfio-user/domain/vfio-user1/1
00:15:12.260 Associating SPDK bdev Controller (SPDK1 ) with lcore 0
00:15:12.260 Associating SPDK bdev Controller (SPDK1 ) with lcore 1
00:15:12.260 Associating SPDK bdev Controller (SPDK1 ) with lcore 2
00:15:12.260 Associating SPDK bdev Controller (SPDK1 ) with lcore 3
00:15:12.260 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration:
00:15:12.260 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1
00:15:12.260 Initialization complete. Launching workers.
00:15:12.260 Starting thread on core 1 with urgent priority queue
00:15:12.260 Starting thread on core 2 with urgent priority queue
00:15:12.260 Starting thread on core 3 with urgent priority queue
00:15:12.260 Starting thread on core 0 with urgent priority queue
00:15:12.260 SPDK bdev Controller (SPDK1 ) core 0: 12273.33 IO/s 8.15 secs/100000 ios
00:15:12.260 SPDK bdev Controller (SPDK1 ) core 1: 10940.33 IO/s 9.14 secs/100000 ios
00:15:12.260 SPDK bdev Controller (SPDK1 ) core 2: 10505.00 IO/s 9.52 secs/100000 ios
00:15:12.260 SPDK bdev Controller (SPDK1 ) core 3: 8070.00 IO/s 12.39 secs/100000 ios
00:15:12.260 ========================================================
00:15:12.260
00:15:12.260 06:58:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
00:15:12.260 [2024-10-16 06:58:11.562647] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:15:12.260 Initializing NVMe Controllers
00:15:12.260 Attaching to /var/run/vfio-user/domain/vfio-user1/1
00:15:12.260 Attached to /var/run/vfio-user/domain/vfio-user1/1
00:15:12.260 Namespace ID: 1 size: 0GB
00:15:12.260 Initialization complete.
00:15:12.260 INFO: using host memory buffer for IO
00:15:12.260 Hello world!
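The block above exercises the same vfio-user endpoint with four example tools: spdk_nvme_perf (a read pass, then a write pass), reconnect, arbitration, and hello_world. They all share one invocation pattern: the -r transport ID names the vfio-user socket directory and the subsystem NQN instead of a PCI address or an IP/port pair. A minimal sketch of the read pass (paths as in this workspace; the flag comments are a gloss on the options, not output from this job):

  # -q 128: queue depth; -o 4096: 4 KiB I/Os; -w read: workload pattern;
  # -t 5: run time in seconds; -c 0x2: core mask (lcore 1 only);
  # -s 256 -g: DPDK memory setup used throughout this job.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/build/bin/spdk_nvme_perf \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
    -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2

The two latency tables are self-consistent for a saturated queue: by Little's law, IOPS is roughly 128 / avg latency, i.e. 128 / 3202.81 us gives about 39965 for reads and 128 / 7977.66 us about 16045 for writes, matching the reported 39963.29 and 16055.95.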
00:15:12.261 [2024-10-16 06:58:11.596865] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:12.261 06:58:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:12.522 [2024-10-16 06:58:11.819231] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:13.464 Initializing NVMe Controllers 00:15:13.464 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:13.464 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:13.464 Initialization complete. Launching workers. 00:15:13.464 submit (in ns) avg, min, max = 6208.8, 2840.0, 3999296.7 00:15:13.464 complete (in ns) avg, min, max = 17309.3, 1635.0, 3999298.3 00:15:13.464 00:15:13.464 Submit histogram 00:15:13.464 ================ 00:15:13.464 Range in us Cumulative Count 00:15:13.464 2.840 - 2.853: 0.0150% ( 3) 00:15:13.464 2.853 - 2.867: 0.2005% ( 37) 00:15:13.464 2.867 - 2.880: 1.2229% ( 204) 00:15:13.464 2.880 - 2.893: 3.4533% ( 445) 00:15:13.464 2.893 - 2.907: 7.6985% ( 847) 00:15:13.464 2.907 - 2.920: 13.0563% ( 1069) 00:15:13.464 2.920 - 2.933: 20.1885% ( 1423) 00:15:13.464 2.933 - 2.947: 27.1602% ( 1391) 00:15:13.464 2.947 - 2.960: 33.0794% ( 1181) 00:15:13.464 2.960 - 2.973: 38.1666% ( 1015) 00:15:13.464 2.973 - 2.987: 43.3240% ( 1029) 00:15:13.464 2.987 - 3.000: 48.4763% ( 1028) 00:15:13.464 3.000 - 3.013: 54.7664% ( 1255) 00:15:13.464 3.013 - 3.027: 62.7406% ( 1591) 00:15:13.464 3.027 - 3.040: 71.4916% ( 1746) 00:15:13.464 3.040 - 3.053: 80.0421% ( 1706) 00:15:13.464 3.053 - 3.067: 87.0940% ( 1407) 00:15:13.464 3.067 - 3.080: 92.5621% ( 1091) 00:15:13.464 3.080 - 3.093: 95.9804% ( 682) 00:15:13.464 3.093 - 3.107: 98.0002% ( 403) 00:15:13.464 3.107 - 3.120: 98.9425% ( 188) 00:15:13.464 3.120 - 3.133: 99.3434% ( 80) 00:15:13.464 3.133 - 3.147: 99.4787% ( 27) 00:15:13.464 3.147 - 3.160: 99.5038% ( 5) 00:15:13.464 3.160 - 3.173: 99.5389% ( 7) 00:15:13.464 3.173 - 3.187: 99.5439% ( 1) 00:15:13.464 3.187 - 3.200: 99.5489% ( 1) 00:15:13.464 3.253 - 3.267: 99.5539% ( 1) 00:15:13.464 3.293 - 3.307: 99.5589% ( 1) 00:15:13.464 3.347 - 3.360: 99.5690% ( 2) 00:15:13.464 3.387 - 3.400: 99.5740% ( 1) 00:15:13.464 3.467 - 3.493: 99.5790% ( 1) 00:15:13.464 3.600 - 3.627: 99.5840% ( 1) 00:15:13.464 3.707 - 3.733: 99.5890% ( 1) 00:15:13.464 3.733 - 3.760: 99.5940% ( 1) 00:15:13.464 3.840 - 3.867: 99.5990% ( 1) 00:15:13.464 4.000 - 4.027: 99.6040% ( 1) 00:15:13.464 4.080 - 4.107: 99.6091% ( 1) 00:15:13.465 4.133 - 4.160: 99.6141% ( 1) 00:15:13.465 4.213 - 4.240: 99.6191% ( 1) 00:15:13.465 4.240 - 4.267: 99.6241% ( 1) 00:15:13.465 4.453 - 4.480: 99.6291% ( 1) 00:15:13.465 4.480 - 4.507: 99.6341% ( 1) 00:15:13.465 4.640 - 4.667: 99.6441% ( 2) 00:15:13.465 4.667 - 4.693: 99.6592% ( 3) 00:15:13.465 4.853 - 4.880: 99.6642% ( 1) 00:15:13.465 4.880 - 4.907: 99.6692% ( 1) 00:15:13.465 4.907 - 4.933: 99.6742% ( 1) 00:15:13.465 5.040 - 5.067: 99.6842% ( 2) 00:15:13.465 5.067 - 5.093: 99.6893% ( 1) 00:15:13.465 5.120 - 5.147: 99.6943% ( 1) 00:15:13.465 5.227 - 5.253: 99.6993% ( 1) 00:15:13.465 5.467 - 5.493: 99.7043% ( 1) 00:15:13.465 5.493 - 5.520: 99.7093% ( 1) 00:15:13.465 5.627 - 5.653: 99.7193% ( 2) 00:15:13.465 5.707 - 5.733: 99.7243% ( 1) 00:15:13.465 5.733 - 5.760: 99.7294% ( 1) 00:15:13.465 5.787 - 5.813: 
99.7344% ( 1) 00:15:13.465 5.813 - 5.840: 99.7494% ( 3) 00:15:13.465 5.867 - 5.893: 99.7594% ( 2) 00:15:13.465 5.947 - 5.973: 99.7644% ( 1) 00:15:13.465 5.973 - 6.000: 99.7694% ( 1) 00:15:13.465 6.000 - 6.027: 99.7745% ( 1) 00:15:13.465 6.053 - 6.080: 99.7795% ( 1) 00:15:13.465 6.107 - 6.133: 99.7845% ( 1) 00:15:13.465 6.240 - 6.267: 99.7995% ( 3) 00:15:13.465 6.267 - 6.293: 99.8095% ( 2) 00:15:13.465 6.293 - 6.320: 99.8246% ( 3) 00:15:13.465 6.347 - 6.373: 99.8296% ( 1) 00:15:13.465 6.427 - 6.453: 99.8396% ( 2) 00:15:13.465 6.453 - 6.480: 99.8446% ( 1) 00:15:13.465 6.480 - 6.507: 99.8496% ( 1) 00:15:13.465 6.560 - 6.587: 99.8547% ( 1) 00:15:13.465 6.587 - 6.613: 99.8597% ( 1) 00:15:13.465 6.613 - 6.640: 99.8647% ( 1) 00:15:13.465 6.640 - 6.667: 99.8697% ( 1) 00:15:13.465 6.747 - 6.773: 99.8747% ( 1) 00:15:13.465 6.800 - 6.827: 99.8797% ( 1) 00:15:13.465 6.827 - 6.880: 99.8847% ( 1) 00:15:13.465 [2024-10-16 06:58:12.840027] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:13.465 6.987 - 7.040: 99.8897% ( 1) 00:15:13.465 7.360 - 7.413: 99.8947% ( 1) 00:15:13.465 7.787 - 7.840: 99.8998% ( 1) 00:15:13.465 8.107 - 8.160: 99.9048% ( 1) 00:15:13.465 8.160 - 8.213: 99.9098% ( 1) 00:15:13.465 10.133 - 10.187: 99.9148% ( 1) 00:15:13.465 11.253 - 11.307: 99.9198% ( 1) 00:15:13.465 3986.773 - 4014.080: 100.0000% ( 16) 00:15:13.465 00:15:13.465 Complete histogram 00:15:13.465 ================== 00:15:13.465 Range in us Cumulative Count 00:15:13.465 1.633 - 1.640: 0.0050% ( 1) 00:15:13.465 1.640 - 1.647: 0.0501% ( 9) 00:15:13.465 1.647 - 1.653: 0.2306% ( 36) 00:15:13.465 1.653 - 1.660: 0.4360% ( 41) 00:15:13.465 1.660 - 1.667: 0.5714% ( 27) 00:15:13.465 1.667 - 1.673: 0.6616% ( 18) 00:15:13.465 1.673 - 1.680: 0.6967% ( 7) 00:15:13.465 1.680 - 1.687: 0.7368% ( 8) 00:15:13.465 1.687 - 1.693: 0.7468% ( 2) 00:15:13.465 1.693 - 1.700: 1.0876% ( 68) 00:15:13.465 1.700 - 1.707: 21.9928% ( 4171) 00:15:13.465 1.707 - 1.720: 57.8087% ( 7146) 00:15:13.465 1.720 - 1.733: 83.4202% ( 5110) 00:15:13.465 1.733 - 1.747: 93.5295% ( 2017) 00:15:13.465 1.747 - 1.760: 96.4415% ( 581) 00:15:13.465 1.760 - 1.773: 97.7646% ( 264) 00:15:13.465 1.773 - 1.787: 98.7620% ( 199) 00:15:13.465 1.787 - 1.800: 99.1730% ( 82) 00:15:13.465 1.800 - 1.813: 99.2883% ( 23) 00:15:13.465 1.813 - 1.827: 99.3434% ( 11) 00:15:13.465 1.827 - 1.840: 99.3484% ( 1) 00:15:13.465 1.840 - 1.853: 99.3585% ( 2) 00:15:13.465 1.867 - 1.880: 99.3635% ( 1) 00:15:13.465 4.027 - 4.053: 99.3685% ( 1) 00:15:13.465 4.107 - 4.133: 99.3735% ( 1) 00:15:13.465 4.240 - 4.267: 99.3785% ( 1) 00:15:13.465 4.347 - 4.373: 99.3835% ( 1) 00:15:13.465 4.427 - 4.453: 99.3885% ( 1) 00:15:13.465 4.453 - 4.480: 99.3986% ( 2) 00:15:13.465 4.507 - 4.533: 99.4086% ( 2) 00:15:13.465 4.587 - 4.613: 99.4136% ( 1) 00:15:13.465 4.613 - 4.640: 99.4186% ( 1) 00:15:13.465 4.640 - 4.667: 99.4236% ( 1) 00:15:13.465 4.667 - 4.693: 99.4286% ( 1) 00:15:13.465 4.693 - 4.720: 99.4336% ( 1) 00:15:13.465 4.773 - 4.800: 99.4387% ( 1) 00:15:13.465 4.827 - 4.853: 99.4437% ( 1) 00:15:13.465 4.907 - 4.933: 99.4487% ( 1) 00:15:13.465 4.933 - 4.960: 99.4587% ( 2) 00:15:13.465 4.960 - 4.987: 99.4637% ( 1) 00:15:13.465 5.067 - 5.093: 99.4687% ( 1) 00:15:13.465 5.120 - 5.147: 99.4737% ( 1) 00:15:13.465 5.147 - 5.173: 99.4787% ( 1) 00:15:13.465 5.200 - 5.227: 99.4838% ( 1) 00:15:13.465 5.227 - 5.253: 99.4888% ( 1) 00:15:13.465 5.253 - 5.280: 99.4938% ( 1) 00:15:13.465 5.307 - 5.333: 99.5038% ( 2) 00:15:13.465 5.360 - 5.387: 99.5088% ( 1) 00:15:13.465 
5.387 - 5.413: 99.5188% ( 2) 00:15:13.465 5.467 - 5.493: 99.5239% ( 1) 00:15:13.465 5.493 - 5.520: 99.5339% ( 2) 00:15:13.465 5.627 - 5.653: 99.5389% ( 1) 00:15:13.465 5.813 - 5.840: 99.5439% ( 1) 00:15:13.465 5.840 - 5.867: 99.5489% ( 1) 00:15:13.465 5.867 - 5.893: 99.5539% ( 1) 00:15:13.465 5.893 - 5.920: 99.5589% ( 1) 00:15:13.465 6.000 - 6.027: 99.5640% ( 1) 00:15:13.465 6.027 - 6.053: 99.5790% ( 3) 00:15:13.465 6.373 - 6.400: 99.5840% ( 1) 00:15:13.465 6.587 - 6.613: 99.5890% ( 1) 00:15:13.465 6.827 - 6.880: 99.5940% ( 1) 00:15:13.465 6.880 - 6.933: 99.5990% ( 1) 00:15:13.465 9.707 - 9.760: 99.6040% ( 1) 00:15:13.465 12.160 - 12.213: 99.6091% ( 1) 00:15:13.465 3263.147 - 3276.800: 99.6141% ( 1) 00:15:13.465 3986.773 - 4014.080: 100.0000% ( 77) 00:15:13.465 00:15:13.465 06:58:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:13.465 06:58:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:13.465 06:58:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:13.465 06:58:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:13.465 06:58:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:13.727 [ 00:15:13.727 { 00:15:13.727 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:13.727 "subtype": "Discovery", 00:15:13.727 "listen_addresses": [], 00:15:13.727 "allow_any_host": true, 00:15:13.727 "hosts": [] 00:15:13.727 }, 00:15:13.727 { 00:15:13.727 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:13.727 "subtype": "NVMe", 00:15:13.727 "listen_addresses": [ 00:15:13.727 { 00:15:13.727 "trtype": "VFIOUSER", 00:15:13.727 "adrfam": "IPv4", 00:15:13.728 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:13.728 "trsvcid": "0" 00:15:13.728 } 00:15:13.728 ], 00:15:13.728 "allow_any_host": true, 00:15:13.728 "hosts": [], 00:15:13.728 "serial_number": "SPDK1", 00:15:13.728 "model_number": "SPDK bdev Controller", 00:15:13.728 "max_namespaces": 32, 00:15:13.728 "min_cntlid": 1, 00:15:13.728 "max_cntlid": 65519, 00:15:13.728 "namespaces": [ 00:15:13.728 { 00:15:13.728 "nsid": 1, 00:15:13.728 "bdev_name": "Malloc1", 00:15:13.728 "name": "Malloc1", 00:15:13.728 "nguid": "F69CA98C3F2B43BCB6843273E4746423", 00:15:13.728 "uuid": "f69ca98c-3f2b-43bc-b684-3273e4746423" 00:15:13.728 } 00:15:13.728 ] 00:15:13.728 }, 00:15:13.728 { 00:15:13.728 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:13.728 "subtype": "NVMe", 00:15:13.728 "listen_addresses": [ 00:15:13.728 { 00:15:13.728 "trtype": "VFIOUSER", 00:15:13.728 "adrfam": "IPv4", 00:15:13.728 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:13.728 "trsvcid": "0" 00:15:13.728 } 00:15:13.728 ], 00:15:13.728 "allow_any_host": true, 00:15:13.728 "hosts": [], 00:15:13.728 "serial_number": "SPDK2", 00:15:13.728 "model_number": "SPDK bdev Controller", 00:15:13.728 "max_namespaces": 32, 00:15:13.728 "min_cntlid": 1, 00:15:13.728 "max_cntlid": 65519, 00:15:13.728 "namespaces": [ 00:15:13.728 { 00:15:13.728 "nsid": 1, 00:15:13.728 "bdev_name": "Malloc2", 00:15:13.728 "name": "Malloc2", 00:15:13.728 "nguid": "89E2AEB613B241DAB30D3305FBF99442", 00:15:13.728 "uuid": "89e2aeb6-13b2-41da-b30d-3305fbf99442" 00:15:13.728 } 
00:15:13.728 ]
00:15:13.728 }
00:15:13.728 ]
00:15:13.728 06:58:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file
00:15:13.728 06:58:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3082958
00:15:13.728 06:58:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file
00:15:13.728 06:58:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file
00:15:13.728 06:58:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0
00:15:13.728 06:58:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:15:13.728 06:58:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:15:13.728 06:58:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0
00:15:13.728 06:58:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file
00:15:13.728 06:58:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3
00:15:13.728 [2024-10-16 06:58:13.209263] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:15:13.989 Malloc3
00:15:13.989 06:58:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2
00:15:13.989 [2024-10-16 06:58:13.411664] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:15:13.989 06:58:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems
00:15:13.989 Asynchronous Event Request test
00:15:13.989 Attaching to /var/run/vfio-user/domain/vfio-user1/1
00:15:13.989 Attached to /var/run/vfio-user/domain/vfio-user1/1
00:15:13.989 Registering asynchronous event callbacks...
00:15:13.989 Starting namespace attribute notice tests for all controllers...
00:15:13.989 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00
00:15:13.989 aer_cb - Changed Namespace
00:15:13.989 Cleaning up...
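The AER test above has two sides. test/nvme/aer/aer connects to cnode1, registers its event callbacks, and creates the touch file; waitforfile (the common/autotest_common.sh helper traced above) blocks the harness until that file exists. Only then is a second namespace hot-added over RPC, which makes the target raise exactly the notice the tool is waiting for: aen_event_type 0x02 is a Notice, and log page 4 is the Changed Namespace List, hence "aer_cb - Changed Namespace". A sketch of the RPC side, using the same calls as the trace (the rpc.py path is workspace-specific, and the jq line is an optional convenience, not part of this job):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc bdev_malloc_create 64 512 --name Malloc3                        # 64 MB bdev, 512-byte blocks
  $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2   # hot-add as NSID 2
  $rpc nvmf_get_subsystems | jq -r '.[].nqn'                           # quick NQN listing

The nvmf_get_subsystems dump that follows confirms the result: cnode1 now carries Malloc1 as nsid 1 and Malloc3 as nsid 2.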
00:15:14.252 [ 00:15:14.252 { 00:15:14.252 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:14.252 "subtype": "Discovery", 00:15:14.252 "listen_addresses": [], 00:15:14.252 "allow_any_host": true, 00:15:14.252 "hosts": [] 00:15:14.252 }, 00:15:14.252 { 00:15:14.252 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:14.252 "subtype": "NVMe", 00:15:14.252 "listen_addresses": [ 00:15:14.252 { 00:15:14.252 "trtype": "VFIOUSER", 00:15:14.252 "adrfam": "IPv4", 00:15:14.252 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:14.252 "trsvcid": "0" 00:15:14.252 } 00:15:14.252 ], 00:15:14.252 "allow_any_host": true, 00:15:14.252 "hosts": [], 00:15:14.252 "serial_number": "SPDK1", 00:15:14.252 "model_number": "SPDK bdev Controller", 00:15:14.252 "max_namespaces": 32, 00:15:14.252 "min_cntlid": 1, 00:15:14.252 "max_cntlid": 65519, 00:15:14.252 "namespaces": [ 00:15:14.252 { 00:15:14.252 "nsid": 1, 00:15:14.252 "bdev_name": "Malloc1", 00:15:14.252 "name": "Malloc1", 00:15:14.252 "nguid": "F69CA98C3F2B43BCB6843273E4746423", 00:15:14.252 "uuid": "f69ca98c-3f2b-43bc-b684-3273e4746423" 00:15:14.252 }, 00:15:14.252 { 00:15:14.252 "nsid": 2, 00:15:14.252 "bdev_name": "Malloc3", 00:15:14.252 "name": "Malloc3", 00:15:14.252 "nguid": "2D626C76FCE64098AB8FD3312AD8BA50", 00:15:14.252 "uuid": "2d626c76-fce6-4098-ab8f-d3312ad8ba50" 00:15:14.252 } 00:15:14.252 ] 00:15:14.252 }, 00:15:14.252 { 00:15:14.252 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:14.252 "subtype": "NVMe", 00:15:14.252 "listen_addresses": [ 00:15:14.252 { 00:15:14.252 "trtype": "VFIOUSER", 00:15:14.252 "adrfam": "IPv4", 00:15:14.252 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:14.252 "trsvcid": "0" 00:15:14.252 } 00:15:14.252 ], 00:15:14.252 "allow_any_host": true, 00:15:14.252 "hosts": [], 00:15:14.252 "serial_number": "SPDK2", 00:15:14.252 "model_number": "SPDK bdev Controller", 00:15:14.252 "max_namespaces": 32, 00:15:14.252 "min_cntlid": 1, 00:15:14.252 "max_cntlid": 65519, 00:15:14.252 "namespaces": [ 00:15:14.252 { 00:15:14.252 "nsid": 1, 00:15:14.252 "bdev_name": "Malloc2", 00:15:14.252 "name": "Malloc2", 00:15:14.252 "nguid": "89E2AEB613B241DAB30D3305FBF99442", 00:15:14.252 "uuid": "89e2aeb6-13b2-41da-b30d-3305fbf99442" 00:15:14.252 } 00:15:14.252 ] 00:15:14.252 } 00:15:14.252 ] 00:15:14.252 06:58:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3082958 00:15:14.252 06:58:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:14.252 06:58:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:14.252 06:58:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:14.252 06:58:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:14.252 [2024-10-16 06:58:13.640803] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
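The spdk_nvme_identify run being traced here repeats the whole host-side bring-up with component debug logging: -L enables SPDK's per-component log flags (nvme, nvme_vfio, vfio_pci), so in addition to the usual controller report the trace records the vfio-user transport mapping each device region (the Bar 0..9 lines) and the generic NVMe enable handshake on the controller registers: confirm CC.EN = 0 and CSTS.RDY = 0, program the admin queue registers (ASQ at offset 0x28, ACQ at 0x30, AQA at 0x24), write CC.EN = 1 (offset 0x14), poll CSTS (offset 0x1c) until RDY = 1, then issue the Identify / Set Features / Get Features admin commands. The command, restated from the line above (workspace path as elsewhere in this job):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/build/bin/spdk_nvme_identify \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
    -g -L nvme -L nvme_vfio -L vfio_pci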
00:15:14.252 [2024-10-16 06:58:13.640857] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3083064 ] 00:15:14.252 [2024-10-16 06:58:13.666337] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:14.252 [2024-10-16 06:58:13.670965] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:14.252 [2024-10-16 06:58:13.670981] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f47d37c3000 00:15:14.252 [2024-10-16 06:58:13.671966] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:14.252 [2024-10-16 06:58:13.672972] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:14.252 [2024-10-16 06:58:13.673976] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:14.252 [2024-10-16 06:58:13.674979] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:14.252 [2024-10-16 06:58:13.675988] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:14.252 [2024-10-16 06:58:13.676995] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:14.252 [2024-10-16 06:58:13.677996] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:14.252 [2024-10-16 06:58:13.679007] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:14.252 [2024-10-16 06:58:13.680014] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:14.252 [2024-10-16 06:58:13.680024] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f47d37b8000 00:15:14.252 [2024-10-16 06:58:13.680941] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:14.252 [2024-10-16 06:58:13.690320] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:14.252 [2024-10-16 06:58:13.690339] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:15:14.253 [2024-10-16 06:58:13.695412] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:14.253 [2024-10-16 06:58:13.695447] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:14.253 [2024-10-16 06:58:13.695509] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:15:14.253 [2024-10-16 
06:58:13.695522] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:15:14.253 [2024-10-16 06:58:13.695526] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:15:14.253 [2024-10-16 06:58:13.696419] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:14.253 [2024-10-16 06:58:13.696427] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:15:14.253 [2024-10-16 06:58:13.696432] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:15:14.253 [2024-10-16 06:58:13.697424] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:14.253 [2024-10-16 06:58:13.697431] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:15:14.253 [2024-10-16 06:58:13.697436] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:15:14.253 [2024-10-16 06:58:13.698427] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:14.253 [2024-10-16 06:58:13.698435] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:14.253 [2024-10-16 06:58:13.699435] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:14.253 [2024-10-16 06:58:13.699441] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:15:14.253 [2024-10-16 06:58:13.699445] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:15:14.253 [2024-10-16 06:58:13.699450] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:14.253 [2024-10-16 06:58:13.699554] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:15:14.253 [2024-10-16 06:58:13.699558] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:14.253 [2024-10-16 06:58:13.699562] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:14.253 [2024-10-16 06:58:13.700442] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:14.253 [2024-10-16 06:58:13.701447] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:14.253 [2024-10-16 06:58:13.702456] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: 
offset 0x14, value 0x460001 00:15:14.253 [2024-10-16 06:58:13.703454] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:14.253 [2024-10-16 06:58:13.703486] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:14.253 [2024-10-16 06:58:13.704460] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:14.253 [2024-10-16 06:58:13.704467] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:14.253 [2024-10-16 06:58:13.704472] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:15:14.253 [2024-10-16 06:58:13.704487] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:15:14.253 [2024-10-16 06:58:13.704492] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:15:14.253 [2024-10-16 06:58:13.704502] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:14.253 [2024-10-16 06:58:13.704506] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:14.253 [2024-10-16 06:58:13.704509] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:14.253 [2024-10-16 06:58:13.704519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:14.253 [2024-10-16 06:58:13.711851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:14.253 [2024-10-16 06:58:13.711860] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:15:14.253 [2024-10-16 06:58:13.711864] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:15:14.253 [2024-10-16 06:58:13.711868] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:15:14.253 [2024-10-16 06:58:13.711871] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:14.253 [2024-10-16 06:58:13.711875] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:15:14.253 [2024-10-16 06:58:13.711878] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:15:14.253 [2024-10-16 06:58:13.711882] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:15:14.253 [2024-10-16 06:58:13.711887] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:15:14.253 [2024-10-16 06:58:13.711895] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT 
CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:14.253 [2024-10-16 06:58:13.719849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:14.253 [2024-10-16 06:58:13.719858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:14.253 [2024-10-16 06:58:13.719865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:14.253 [2024-10-16 06:58:13.719871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:14.253 [2024-10-16 06:58:13.719877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:14.253 [2024-10-16 06:58:13.719880] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:15:14.253 [2024-10-16 06:58:13.719888] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:14.253 [2024-10-16 06:58:13.719895] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:14.253 [2024-10-16 06:58:13.727849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:14.253 [2024-10-16 06:58:13.727855] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:15:14.253 [2024-10-16 06:58:13.727859] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:14.253 [2024-10-16 06:58:13.727864] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:15:14.253 [2024-10-16 06:58:13.727870] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:15:14.253 [2024-10-16 06:58:13.727876] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:14.253 [2024-10-16 06:58:13.735849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:14.253 [2024-10-16 06:58:13.735894] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:15:14.253 [2024-10-16 06:58:13.735900] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:15:14.253 [2024-10-16 06:58:13.735906] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:14.253 [2024-10-16 06:58:13.735909] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:14.253 [2024-10-16 06:58:13.735912] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number 
of PRP entries: 1 00:15:14.253 [2024-10-16 06:58:13.735916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:14.253 [2024-10-16 06:58:13.743848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:14.253 [2024-10-16 06:58:13.743856] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:15:14.253 [2024-10-16 06:58:13.743865] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:15:14.253 [2024-10-16 06:58:13.743871] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:15:14.253 [2024-10-16 06:58:13.743876] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:14.253 [2024-10-16 06:58:13.743879] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:14.253 [2024-10-16 06:58:13.743882] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:14.253 [2024-10-16 06:58:13.743886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:14.253 [2024-10-16 06:58:13.750855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:14.253 [2024-10-16 06:58:13.750867] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:14.253 [2024-10-16 06:58:13.750873] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:14.253 [2024-10-16 06:58:13.750878] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:14.253 [2024-10-16 06:58:13.750881] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:14.253 [2024-10-16 06:58:13.750885] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:14.253 [2024-10-16 06:58:13.750890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:14.516 [2024-10-16 06:58:13.759849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:14.516 [2024-10-16 06:58:13.759857] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:14.516 [2024-10-16 06:58:13.759862] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:15:14.516 [2024-10-16 06:58:13.759870] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:15:14.516 [2024-10-16 06:58:13.759875] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:15:14.516 [2024-10-16 06:58:13.759878] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:14.516 [2024-10-16 06:58:13.759882] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:15:14.516 [2024-10-16 06:58:13.759886] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:15:14.516 [2024-10-16 06:58:13.759889] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:15:14.516 [2024-10-16 06:58:13.759893] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:15:14.516 [2024-10-16 06:58:13.759906] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:14.516 [2024-10-16 06:58:13.767851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:14.516 [2024-10-16 06:58:13.767867] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:14.516 [2024-10-16 06:58:13.775848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:14.516 [2024-10-16 06:58:13.775858] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:14.516 [2024-10-16 06:58:13.783850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:14.516 [2024-10-16 06:58:13.783860] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:14.516 [2024-10-16 06:58:13.791848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:14.516 [2024-10-16 06:58:13.791860] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:14.516 [2024-10-16 06:58:13.791864] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:14.516 [2024-10-16 06:58:13.791866] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:14.516 [2024-10-16 06:58:13.791869] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:14.516 [2024-10-16 06:58:13.791871] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:14.516 [2024-10-16 06:58:13.791876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:14.516 [2024-10-16 06:58:13.791884] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:14.516 [2024-10-16 06:58:13.791887] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:14.516 [2024-10-16 06:58:13.791890] 
nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:14.516 [2024-10-16 06:58:13.791894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:14.516 [2024-10-16 06:58:13.791899] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:14.516 [2024-10-16 06:58:13.791903] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:14.516 [2024-10-16 06:58:13.791905] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:14.516 [2024-10-16 06:58:13.791909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:14.516 [2024-10-16 06:58:13.791915] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:14.516 [2024-10-16 06:58:13.791918] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:14.516 [2024-10-16 06:58:13.791920] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:14.516 [2024-10-16 06:58:13.791925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:14.516 [2024-10-16 06:58:13.799850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:14.516 [2024-10-16 06:58:13.799862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:14.516 [2024-10-16 06:58:13.799869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:14.516 [2024-10-16 06:58:13.799874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:14.516 ===================================================== 00:15:14.516 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:14.516 ===================================================== 00:15:14.516 Controller Capabilities/Features 00:15:14.516 ================================ 00:15:14.516 Vendor ID: 4e58 00:15:14.516 Subsystem Vendor ID: 4e58 00:15:14.516 Serial Number: SPDK2 00:15:14.516 Model Number: SPDK bdev Controller 00:15:14.516 Firmware Version: 25.01 00:15:14.516 Recommended Arb Burst: 6 00:15:14.516 IEEE OUI Identifier: 8d 6b 50 00:15:14.516 Multi-path I/O 00:15:14.516 May have multiple subsystem ports: Yes 00:15:14.516 May have multiple controllers: Yes 00:15:14.516 Associated with SR-IOV VF: No 00:15:14.516 Max Data Transfer Size: 131072 00:15:14.516 Max Number of Namespaces: 32 00:15:14.516 Max Number of I/O Queues: 127 00:15:14.516 NVMe Specification Version (VS): 1.3 00:15:14.516 NVMe Specification Version (Identify): 1.3 00:15:14.516 Maximum Queue Entries: 256 00:15:14.516 Contiguous Queues Required: Yes 00:15:14.516 Arbitration Mechanisms Supported 00:15:14.516 Weighted Round Robin: Not Supported 00:15:14.516 Vendor Specific: Not Supported 00:15:14.516 Reset Timeout: 15000 ms 00:15:14.516 Doorbell Stride: 4 bytes 00:15:14.516 NVM Subsystem Reset: Not Supported 00:15:14.516 Command 
Sets Supported 00:15:14.516 NVM Command Set: Supported 00:15:14.516 Boot Partition: Not Supported 00:15:14.516 Memory Page Size Minimum: 4096 bytes 00:15:14.516 Memory Page Size Maximum: 4096 bytes 00:15:14.516 Persistent Memory Region: Not Supported 00:15:14.516 Optional Asynchronous Events Supported 00:15:14.516 Namespace Attribute Notices: Supported 00:15:14.516 Firmware Activation Notices: Not Supported 00:15:14.516 ANA Change Notices: Not Supported 00:15:14.516 PLE Aggregate Log Change Notices: Not Supported 00:15:14.516 LBA Status Info Alert Notices: Not Supported 00:15:14.516 EGE Aggregate Log Change Notices: Not Supported 00:15:14.516 Normal NVM Subsystem Shutdown event: Not Supported 00:15:14.516 Zone Descriptor Change Notices: Not Supported 00:15:14.516 Discovery Log Change Notices: Not Supported 00:15:14.516 Controller Attributes 00:15:14.516 128-bit Host Identifier: Supported 00:15:14.516 Non-Operational Permissive Mode: Not Supported 00:15:14.516 NVM Sets: Not Supported 00:15:14.516 Read Recovery Levels: Not Supported 00:15:14.516 Endurance Groups: Not Supported 00:15:14.516 Predictable Latency Mode: Not Supported 00:15:14.516 Traffic Based Keep ALive: Not Supported 00:15:14.516 Namespace Granularity: Not Supported 00:15:14.516 SQ Associations: Not Supported 00:15:14.516 UUID List: Not Supported 00:15:14.516 Multi-Domain Subsystem: Not Supported 00:15:14.516 Fixed Capacity Management: Not Supported 00:15:14.516 Variable Capacity Management: Not Supported 00:15:14.516 Delete Endurance Group: Not Supported 00:15:14.516 Delete NVM Set: Not Supported 00:15:14.516 Extended LBA Formats Supported: Not Supported 00:15:14.516 Flexible Data Placement Supported: Not Supported 00:15:14.516 00:15:14.516 Controller Memory Buffer Support 00:15:14.516 ================================ 00:15:14.516 Supported: No 00:15:14.516 00:15:14.516 Persistent Memory Region Support 00:15:14.516 ================================ 00:15:14.516 Supported: No 00:15:14.516 00:15:14.516 Admin Command Set Attributes 00:15:14.516 ============================ 00:15:14.516 Security Send/Receive: Not Supported 00:15:14.517 Format NVM: Not Supported 00:15:14.517 Firmware Activate/Download: Not Supported 00:15:14.517 Namespace Management: Not Supported 00:15:14.517 Device Self-Test: Not Supported 00:15:14.517 Directives: Not Supported 00:15:14.517 NVMe-MI: Not Supported 00:15:14.517 Virtualization Management: Not Supported 00:15:14.517 Doorbell Buffer Config: Not Supported 00:15:14.517 Get LBA Status Capability: Not Supported 00:15:14.517 Command & Feature Lockdown Capability: Not Supported 00:15:14.517 Abort Command Limit: 4 00:15:14.517 Async Event Request Limit: 4 00:15:14.517 Number of Firmware Slots: N/A 00:15:14.517 Firmware Slot 1 Read-Only: N/A 00:15:14.517 Firmware Activation Without Reset: N/A 00:15:14.517 Multiple Update Detection Support: N/A 00:15:14.517 Firmware Update Granularity: No Information Provided 00:15:14.517 Per-Namespace SMART Log: No 00:15:14.517 Asymmetric Namespace Access Log Page: Not Supported 00:15:14.517 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:14.517 Command Effects Log Page: Supported 00:15:14.517 Get Log Page Extended Data: Supported 00:15:14.517 Telemetry Log Pages: Not Supported 00:15:14.517 Persistent Event Log Pages: Not Supported 00:15:14.517 Supported Log Pages Log Page: May Support 00:15:14.517 Commands Supported & Effects Log Page: Not Supported 00:15:14.517 Feature Identifiers & Effects Log Page:May Support 00:15:14.517 NVMe-MI Commands & Effects Log Page: May Support 
00:15:14.517 Data Area 4 for Telemetry Log: Not Supported 00:15:14.517 Error Log Page Entries Supported: 128 00:15:14.517 Keep Alive: Supported 00:15:14.517 Keep Alive Granularity: 10000 ms 00:15:14.517 00:15:14.517 NVM Command Set Attributes 00:15:14.517 ========================== 00:15:14.517 Submission Queue Entry Size 00:15:14.517 Max: 64 00:15:14.517 Min: 64 00:15:14.517 Completion Queue Entry Size 00:15:14.517 Max: 16 00:15:14.517 Min: 16 00:15:14.517 Number of Namespaces: 32 00:15:14.517 Compare Command: Supported 00:15:14.517 Write Uncorrectable Command: Not Supported 00:15:14.517 Dataset Management Command: Supported 00:15:14.517 Write Zeroes Command: Supported 00:15:14.517 Set Features Save Field: Not Supported 00:15:14.517 Reservations: Not Supported 00:15:14.517 Timestamp: Not Supported 00:15:14.517 Copy: Supported 00:15:14.517 Volatile Write Cache: Present 00:15:14.517 Atomic Write Unit (Normal): 1 00:15:14.517 Atomic Write Unit (PFail): 1 00:15:14.517 Atomic Compare & Write Unit: 1 00:15:14.517 Fused Compare & Write: Supported 00:15:14.517 Scatter-Gather List 00:15:14.517 SGL Command Set: Supported (Dword aligned) 00:15:14.517 SGL Keyed: Not Supported 00:15:14.517 SGL Bit Bucket Descriptor: Not Supported 00:15:14.517 SGL Metadata Pointer: Not Supported 00:15:14.517 Oversized SGL: Not Supported 00:15:14.517 SGL Metadata Address: Not Supported 00:15:14.517 SGL Offset: Not Supported 00:15:14.517 Transport SGL Data Block: Not Supported 00:15:14.517 Replay Protected Memory Block: Not Supported 00:15:14.517 00:15:14.517 Firmware Slot Information 00:15:14.517 ========================= 00:15:14.517 Active slot: 1 00:15:14.517 Slot 1 Firmware Revision: 25.01 00:15:14.517 00:15:14.517 00:15:14.517 Commands Supported and Effects 00:15:14.517 ============================== 00:15:14.517 Admin Commands 00:15:14.517 -------------- 00:15:14.517 Get Log Page (02h): Supported 00:15:14.517 Identify (06h): Supported 00:15:14.517 Abort (08h): Supported 00:15:14.517 Set Features (09h): Supported 00:15:14.517 Get Features (0Ah): Supported 00:15:14.517 Asynchronous Event Request (0Ch): Supported 00:15:14.517 Keep Alive (18h): Supported 00:15:14.517 I/O Commands 00:15:14.517 ------------ 00:15:14.517 Flush (00h): Supported LBA-Change 00:15:14.517 Write (01h): Supported LBA-Change 00:15:14.517 Read (02h): Supported 00:15:14.517 Compare (05h): Supported 00:15:14.517 Write Zeroes (08h): Supported LBA-Change 00:15:14.517 Dataset Management (09h): Supported LBA-Change 00:15:14.517 Copy (19h): Supported LBA-Change 00:15:14.517 00:15:14.517 Error Log 00:15:14.517 ========= 00:15:14.517 00:15:14.517 Arbitration 00:15:14.517 =========== 00:15:14.517 Arbitration Burst: 1 00:15:14.517 00:15:14.517 Power Management 00:15:14.517 ================ 00:15:14.517 Number of Power States: 1 00:15:14.517 Current Power State: Power State #0 00:15:14.517 Power State #0: 00:15:14.517 Max Power: 0.00 W 00:15:14.517 Non-Operational State: Operational 00:15:14.517 Entry Latency: Not Reported 00:15:14.517 Exit Latency: Not Reported 00:15:14.517 Relative Read Throughput: 0 00:15:14.517 Relative Read Latency: 0 00:15:14.517 Relative Write Throughput: 0 00:15:14.517 Relative Write Latency: 0 00:15:14.517 Idle Power: Not Reported 00:15:14.517 Active Power: Not Reported 00:15:14.517 Non-Operational Permissive Mode: Not Supported 00:15:14.517 00:15:14.517 Health Information 00:15:14.517 ================== 00:15:14.517 Critical Warnings: 00:15:14.517 Available Spare Space: OK 00:15:14.517 Temperature: OK 00:15:14.517 Device 
Reliability: OK 00:15:14.517 Read Only: No 00:15:14.517 Volatile Memory Backup: OK 00:15:14.517 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:14.517 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:14.517 Available Spare: 0% 00:15:14.517 Available Spare Threshold: 0% 00:15:14.517 [2024-10-16 06:58:13.799947] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:14.517 [2024-10-16 06:58:13.807850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:14.517 [2024-10-16 06:58:13.807877] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:15:14.517 [2024-10-16 06:58:13.807884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.517 [2024-10-16 06:58:13.807889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.517 [2024-10-16 06:58:13.807893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.517 [2024-10-16 06:58:13.807898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.517 [2024-10-16 06:58:13.807928] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:14.517 [2024-10-16 06:58:13.807936] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:14.517 [2024-10-16 06:58:13.808935] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:14.517 [2024-10-16 06:58:13.808971] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:15:14.517 [2024-10-16 06:58:13.808977] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:15:14.517 [2024-10-16 06:58:13.809936] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:14.517 [2024-10-16 06:58:13.809945] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:15:14.517 [2024-10-16 06:58:13.809990] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:14.517 [2024-10-16 06:58:13.810959] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:14.517 Life Percentage Used: 0% 00:15:14.517 Data Units Read: 0 00:15:14.517 Data Units Written: 0 00:15:14.517 Host Read Commands: 0 00:15:14.517 Host Write Commands: 0 00:15:14.517 Controller Busy Time: 0 minutes 00:15:14.517 Power Cycles: 0 00:15:14.517 Power On Hours: 0 hours 00:15:14.517 Unsafe Shutdowns: 0 00:15:14.517 Unrecoverable Media Errors: 0 00:15:14.517 Lifetime Error Log Entries: 0 00:15:14.517 Warning Temperature Time: 0 minutes 00:15:14.517 Critical Temperature Time: 0 minutes 00:15:14.517 00:15:14.517 Number of Queues 00:15:14.517 ================ 00:15:14.517 Number of 
I/O Submission Queues: 127 00:15:14.517 Number of I/O Completion Queues: 127 00:15:14.517 00:15:14.517 Active Namespaces 00:15:14.517 ================= 00:15:14.517 Namespace ID:1 00:15:14.517 Error Recovery Timeout: Unlimited 00:15:14.517 Command Set Identifier: NVM (00h) 00:15:14.517 Deallocate: Supported 00:15:14.517 Deallocated/Unwritten Error: Not Supported 00:15:14.517 Deallocated Read Value: Unknown 00:15:14.517 Deallocate in Write Zeroes: Not Supported 00:15:14.517 Deallocated Guard Field: 0xFFFF 00:15:14.517 Flush: Supported 00:15:14.517 Reservation: Supported 00:15:14.517 Namespace Sharing Capabilities: Multiple Controllers 00:15:14.517 Size (in LBAs): 131072 (0GiB) 00:15:14.517 Capacity (in LBAs): 131072 (0GiB) 00:15:14.517 Utilization (in LBAs): 131072 (0GiB) 00:15:14.517 NGUID: 89E2AEB613B241DAB30D3305FBF99442 00:15:14.517 UUID: 89e2aeb6-13b2-41da-b30d-3305fbf99442 00:15:14.517 Thin Provisioning: Not Supported 00:15:14.517 Per-NS Atomic Units: Yes 00:15:14.517 Atomic Boundary Size (Normal): 0 00:15:14.517 Atomic Boundary Size (PFail): 0 00:15:14.517 Atomic Boundary Offset: 0 00:15:14.517 Maximum Single Source Range Length: 65535 00:15:14.517 Maximum Copy Length: 65535 00:15:14.517 Maximum Source Range Count: 1 00:15:14.518 NGUID/EUI64 Never Reused: No 00:15:14.518 Namespace Write Protected: No 00:15:14.518 Number of LBA Formats: 1 00:15:14.518 Current LBA Format: LBA Format #00 00:15:14.518 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:14.518 00:15:14.518 06:58:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:14.518 [2024-10-16 06:58:13.989256] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:19.807 Initializing NVMe Controllers 00:15:19.807 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:19.807 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:19.807 Initialization complete. Launching workers. 
00:15:19.807 ======================================================== 00:15:19.807 Latency(us) 00:15:19.807 Device Information : IOPS MiB/s Average min max 00:15:19.807 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39977.62 156.16 3201.65 843.92 6836.35 00:15:19.807 ======================================================== 00:15:19.807 Total : 39977.62 156.16 3201.65 843.92 6836.35 00:15:19.807 00:15:19.807 [2024-10-16 06:58:19.094027] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:19.807 06:58:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:19.807 [2024-10-16 06:58:19.276618] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:25.094 Initializing NVMe Controllers 00:15:25.095 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:25.095 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:25.095 Initialization complete. Launching workers. 00:15:25.095 ======================================================== 00:15:25.095 Latency(us) 00:15:25.095 Device Information : IOPS MiB/s Average min max 00:15:25.095 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39997.38 156.24 3200.06 841.51 9779.16 00:15:25.095 ======================================================== 00:15:25.095 Total : 39997.38 156.24 3200.06 841.51 9779.16 00:15:25.095 00:15:25.095 [2024-10-16 06:58:24.293112] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:25.095 06:58:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:25.095 [2024-10-16 06:58:24.479227] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:30.383 [2024-10-16 06:58:29.621955] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:30.383 Initializing NVMe Controllers 00:15:30.383 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:30.383 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:30.383 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:30.383 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:30.383 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:30.383 Initialization complete. Launching workers. 
00:15:30.383 Starting thread on core 2 00:15:30.383 Starting thread on core 3 00:15:30.383 Starting thread on core 1 00:15:30.383 06:58:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:30.383 [2024-10-16 06:58:29.855231] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:33.684 [2024-10-16 06:58:32.905805] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:33.684 Initializing NVMe Controllers 00:15:33.684 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:33.684 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:33.684 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:33.684 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:33.684 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:33.684 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:33.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:33.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:33.684 Initialization complete. Launching workers. 00:15:33.684 Starting thread on core 1 with urgent priority queue 00:15:33.684 Starting thread on core 2 with urgent priority queue 00:15:33.684 Starting thread on core 3 with urgent priority queue 00:15:33.684 Starting thread on core 0 with urgent priority queue 00:15:33.684 SPDK bdev Controller (SPDK2 ) core 0: 8940.33 IO/s 11.19 secs/100000 ios 00:15:33.684 SPDK bdev Controller (SPDK2 ) core 1: 10941.67 IO/s 9.14 secs/100000 ios 00:15:33.684 SPDK bdev Controller (SPDK2 ) core 2: 14408.67 IO/s 6.94 secs/100000 ios 00:15:33.684 SPDK bdev Controller (SPDK2 ) core 3: 8916.33 IO/s 11.22 secs/100000 ios 00:15:33.684 ======================================================== 00:15:33.684 00:15:33.684 06:58:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:33.684 [2024-10-16 06:58:33.131253] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:33.684 Initializing NVMe Controllers 00:15:33.684 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:33.684 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:33.684 Namespace ID: 1 size: 0GB 00:15:33.684 Initialization complete. 00:15:33.684 INFO: using host memory buffer for IO 00:15:33.684 Hello world! 
00:15:33.685 [2024-10-16 06:58:33.141325] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:33.685 06:58:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:33.946 [2024-10-16 06:58:33.361517] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:35.332 Initializing NVMe Controllers 00:15:35.332 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:35.332 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:35.332 Initialization complete. Launching workers. 00:15:35.332 submit (in ns) avg, min, max = 5384.5, 2834.2, 4000790.0 00:15:35.332 complete (in ns) avg, min, max = 15415.3, 1627.5, 3998684.2 00:15:35.332 00:15:35.332 Submit histogram 00:15:35.332 ================ 00:15:35.332 Range in us Cumulative Count 00:15:35.332 2.827 - 2.840: 0.0199% ( 4) 00:15:35.332 2.840 - 2.853: 0.1842% ( 33) 00:15:35.332 2.853 - 2.867: 1.0407% ( 172) 00:15:35.332 2.867 - 2.880: 3.6349% ( 521) 00:15:35.332 2.880 - 2.893: 8.2209% ( 921) 00:15:35.332 2.893 - 2.907: 13.4293% ( 1046) 00:15:35.332 2.907 - 2.920: 18.1696% ( 952) 00:15:35.332 2.920 - 2.933: 23.3232% ( 1035) 00:15:35.332 2.933 - 2.947: 29.2636% ( 1193) 00:15:35.332 2.947 - 2.960: 35.1890% ( 1190) 00:15:35.332 2.960 - 2.973: 40.9202% ( 1151) 00:15:35.332 2.973 - 2.987: 46.6962% ( 1160) 00:15:35.332 2.987 - 3.000: 52.1585% ( 1097) 00:15:35.332 3.000 - 3.013: 59.4682% ( 1468) 00:15:35.332 3.013 - 3.027: 67.8534% ( 1684) 00:15:35.332 3.027 - 3.040: 76.2984% ( 1696) 00:15:35.332 3.040 - 3.053: 83.8520% ( 1517) 00:15:35.332 3.053 - 3.067: 89.9617% ( 1227) 00:15:35.332 3.067 - 3.080: 94.2538% ( 862) 00:15:35.332 3.080 - 3.093: 96.9427% ( 540) 00:15:35.332 3.093 - 3.107: 98.5311% ( 319) 00:15:35.332 3.107 - 3.120: 99.2182% ( 138) 00:15:35.332 3.120 - 3.133: 99.4324% ( 43) 00:15:35.332 3.133 - 3.147: 99.4971% ( 13) 00:15:35.332 3.147 - 3.160: 99.5120% ( 3) 00:15:35.332 3.160 - 3.173: 99.5270% ( 3) 00:15:35.332 3.200 - 3.213: 99.5319% ( 1) 00:15:35.332 3.240 - 3.253: 99.5469% ( 3) 00:15:35.332 3.280 - 3.293: 99.5519% ( 1) 00:15:35.332 3.293 - 3.307: 99.5568% ( 1) 00:15:35.332 3.320 - 3.333: 99.5668% ( 2) 00:15:35.332 3.333 - 3.347: 99.5718% ( 1) 00:15:35.332 3.347 - 3.360: 99.5768% ( 1) 00:15:35.332 3.413 - 3.440: 99.5817% ( 1) 00:15:35.332 3.440 - 3.467: 99.5867% ( 1) 00:15:35.332 3.467 - 3.493: 99.5917% ( 1) 00:15:35.332 3.520 - 3.547: 99.5967% ( 1) 00:15:35.332 3.573 - 3.600: 99.6017% ( 1) 00:15:35.332 3.627 - 3.653: 99.6066% ( 1) 00:15:35.332 3.733 - 3.760: 99.6116% ( 1) 00:15:35.332 3.760 - 3.787: 99.6265% ( 3) 00:15:35.332 3.787 - 3.813: 99.6315% ( 1) 00:15:35.332 3.867 - 3.893: 99.6365% ( 1) 00:15:35.332 3.893 - 3.920: 99.6415% ( 1) 00:15:35.332 4.027 - 4.053: 99.6465% ( 1) 00:15:35.332 4.187 - 4.213: 99.6514% ( 1) 00:15:35.332 4.240 - 4.267: 99.6564% ( 1) 00:15:35.332 4.293 - 4.320: 99.6714% ( 3) 00:15:35.332 4.400 - 4.427: 99.6763% ( 1) 00:15:35.332 4.427 - 4.453: 99.6813% ( 1) 00:15:35.332 4.507 - 4.533: 99.6863% ( 1) 00:15:35.332 4.533 - 4.560: 99.6913% ( 1) 00:15:35.332 4.747 - 4.773: 99.7062% ( 3) 00:15:35.332 4.773 - 4.800: 99.7112% ( 1) 00:15:35.332 4.880 - 4.907: 99.7162% ( 1) 00:15:35.332 4.907 - 4.933: 99.7261% ( 2) 00:15:35.332 4.933 - 4.960: 99.7311% ( 1) 00:15:35.332 5.013 - 5.040: 
99.7361% ( 1) 00:15:35.332 5.067 - 5.093: 99.7411% ( 1) 00:15:35.332 5.627 - 5.653: 99.7461% ( 1) 00:15:35.332 5.707 - 5.733: 99.7510% ( 1) 00:15:35.332 5.733 - 5.760: 99.7560% ( 1) 00:15:35.332 5.760 - 5.787: 99.7610% ( 1) 00:15:35.332 5.813 - 5.840: 99.7710% ( 2) 00:15:35.332 5.840 - 5.867: 99.7759% ( 1) 00:15:35.332 5.973 - 6.000: 99.7809% ( 1) 00:15:35.332 6.053 - 6.080: 99.7859% ( 1) 00:15:35.332 6.107 - 6.133: 99.7909% ( 1) 00:15:35.332 6.133 - 6.160: 99.7958% ( 1) 00:15:35.332 6.187 - 6.213: 99.8008% ( 1) 00:15:35.332 6.213 - 6.240: 99.8058% ( 1) 00:15:35.332 6.240 - 6.267: 99.8108% ( 1) 00:15:35.332 6.293 - 6.320: 99.8158% ( 1) 00:15:35.332 6.320 - 6.347: 99.8207% ( 1) 00:15:35.332 6.533 - 6.560: 99.8307% ( 2) 00:15:35.332 6.560 - 6.587: 99.8357% ( 1) 00:15:35.332 6.613 - 6.640: 99.8506% ( 3) 00:15:35.332 6.667 - 6.693: 99.8556% ( 1) 00:15:35.332 6.800 - 6.827: 99.8606% ( 1) 00:15:35.332 [2024-10-16 06:58:34.454390] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:35.332 6.827 - 6.880: 99.8656% ( 1) 00:15:35.332 6.933 - 6.987: 99.8705% ( 1) 00:15:35.332 7.093 - 7.147: 99.8755% ( 1) 00:15:35.332 7.200 - 7.253: 99.8805% ( 1) 00:15:35.332 7.253 - 7.307: 99.8855% ( 1) 00:15:35.332 7.307 - 7.360: 99.8905% ( 1) 00:15:35.332 7.413 - 7.467: 99.8954% ( 1) 00:15:35.332 7.680 - 7.733: 99.9004% ( 1) 00:15:35.332 7.733 - 7.787: 99.9104% ( 2) 00:15:35.332 7.947 - 8.000: 99.9154% ( 1) 00:15:35.332 8.160 - 8.213: 99.9203% ( 1) 00:15:35.332 8.320 - 8.373: 99.9253% ( 1) 00:15:35.332 8.427 - 8.480: 99.9303% ( 1) 00:15:35.332 8.533 - 8.587: 99.9353% ( 1) 00:15:35.332 12.373 - 12.427: 99.9402% ( 1) 00:15:35.332 3986.773 - 4014.080: 100.0000% ( 12) 00:15:35.332 00:15:35.332 Complete histogram 00:15:35.332 ================== 00:15:35.332 Range in us Cumulative Count 00:15:35.332 1.627 - 1.633: 0.0100% ( 2) 00:15:35.332 1.640 - 1.647: 0.3386% ( 66) 00:15:35.332 1.647 - 1.653: 0.6623% ( 65) 00:15:35.332 1.653 - 1.660: 0.7818% ( 24) 00:15:35.332 1.660 - 1.667: 0.8465% ( 13) 00:15:35.332 1.667 - 1.673: 0.9062% ( 12) 00:15:35.332 1.673 - 1.680: 0.9162% ( 2) 00:15:35.332 1.680 - 1.687: 0.9311% ( 3) 00:15:35.332 1.687 - 1.693: 6.6823% ( 1155) 00:15:35.332 1.693 - 1.700: 42.6082% ( 7215) 00:15:35.332 1.700 - 1.707: 50.4208% ( 1569) 00:15:35.332 1.707 - 1.720: 73.5099% ( 4637) 00:15:35.332 1.720 - 1.733: 81.7856% ( 1662) 00:15:35.332 1.733 - 1.747: 83.5732% ( 359) 00:15:35.332 1.747 - 1.760: 86.4363% ( 575) 00:15:35.332 1.760 - 1.773: 90.9974% ( 916) 00:15:35.332 1.773 - 1.787: 95.4788% ( 900) 00:15:35.332 1.787 - 1.800: 98.0630% ( 519) 00:15:35.332 1.800 - 1.813: 99.0639% ( 201) 00:15:35.332 1.813 - 1.827: 99.2979% ( 47) 00:15:35.332 1.827 - 1.840: 99.3776% ( 16) 00:15:35.332 1.840 - 1.853: 99.3925% ( 3) 00:15:35.332 1.853 - 1.867: 99.4025% ( 2) 00:15:35.332 1.920 - 1.933: 99.4324% ( 6) 00:15:35.332 1.947 - 1.960: 99.4373% ( 1) 00:15:35.332 2.000 - 2.013: 99.4423% ( 1) 00:15:35.332 2.053 - 2.067: 99.4473% ( 1) 00:15:35.332 2.093 - 2.107: 99.4523% ( 1) 00:15:35.332 2.160 - 2.173: 99.4622% ( 2) 00:15:35.332 2.213 - 2.227: 99.4672% ( 1) 00:15:35.332 2.227 - 2.240: 99.4772% ( 2) 00:15:35.332 2.413 - 2.427: 99.4821% ( 1) 00:15:35.332 3.387 - 3.400: 99.4871% ( 1) 00:15:35.332 3.520 - 3.547: 99.4921% ( 1) 00:15:35.332 3.573 - 3.600: 99.5021% ( 2) 00:15:35.332 4.240 - 4.267: 99.5070% ( 1) 00:15:35.332 4.320 - 4.347: 99.5170% ( 2) 00:15:35.332 4.480 - 4.507: 99.5270% ( 2) 00:15:35.332 4.533 - 4.560: 99.5319% ( 1) 00:15:35.332 4.560 - 4.587: 99.5369% ( 1) 
00:15:35.332 4.747 - 4.773: 99.5419% ( 1) 00:15:35.332 4.773 - 4.800: 99.5469% ( 1) 00:15:35.332 4.907 - 4.933: 99.5568% ( 2) 00:15:35.332 5.067 - 5.093: 99.5668% ( 2) 00:15:35.332 5.173 - 5.200: 99.5718% ( 1) 00:15:35.332 5.360 - 5.387: 99.5768% ( 1) 00:15:35.333 5.493 - 5.520: 99.5817% ( 1) 00:15:35.333 5.520 - 5.547: 99.5867% ( 1) 00:15:35.333 5.707 - 5.733: 99.5917% ( 1) 00:15:35.333 5.787 - 5.813: 99.5967% ( 1) 00:15:35.333 6.160 - 6.187: 99.6017% ( 1) 00:15:35.333 6.213 - 6.240: 99.6066% ( 1) 00:15:35.333 6.240 - 6.267: 99.6116% ( 1) 00:15:35.333 6.400 - 6.427: 99.6166% ( 1) 00:15:35.333 6.427 - 6.453: 99.6216% ( 1) 00:15:35.333 6.773 - 6.800: 99.6265% ( 1) 00:15:35.333 7.147 - 7.200: 99.6315% ( 1) 00:15:35.333 7.253 - 7.307: 99.6365% ( 1) 00:15:35.333 7.520 - 7.573: 99.6415% ( 1) 00:15:35.333 7.733 - 7.787: 99.6465% ( 1) 00:15:35.333 10.027 - 10.080: 99.6514% ( 1) 00:15:35.333 12.000 - 12.053: 99.6564% ( 1) 00:15:35.333 3345.067 - 3358.720: 99.6614% ( 1) 00:15:35.333 3986.773 - 4014.080: 100.0000% ( 68) 00:15:35.333 00:15:35.333 06:58:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:35.333 06:58:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:35.333 06:58:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:35.333 06:58:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:35.333 06:58:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:35.333 [ 00:15:35.333 { 00:15:35.333 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:35.333 "subtype": "Discovery", 00:15:35.333 "listen_addresses": [], 00:15:35.333 "allow_any_host": true, 00:15:35.333 "hosts": [] 00:15:35.333 }, 00:15:35.333 { 00:15:35.333 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:35.333 "subtype": "NVMe", 00:15:35.333 "listen_addresses": [ 00:15:35.333 { 00:15:35.333 "trtype": "VFIOUSER", 00:15:35.333 "adrfam": "IPv4", 00:15:35.333 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:35.333 "trsvcid": "0" 00:15:35.333 } 00:15:35.333 ], 00:15:35.333 "allow_any_host": true, 00:15:35.333 "hosts": [], 00:15:35.333 "serial_number": "SPDK1", 00:15:35.333 "model_number": "SPDK bdev Controller", 00:15:35.333 "max_namespaces": 32, 00:15:35.333 "min_cntlid": 1, 00:15:35.333 "max_cntlid": 65519, 00:15:35.333 "namespaces": [ 00:15:35.333 { 00:15:35.333 "nsid": 1, 00:15:35.333 "bdev_name": "Malloc1", 00:15:35.333 "name": "Malloc1", 00:15:35.333 "nguid": "F69CA98C3F2B43BCB6843273E4746423", 00:15:35.333 "uuid": "f69ca98c-3f2b-43bc-b684-3273e4746423" 00:15:35.333 }, 00:15:35.333 { 00:15:35.333 "nsid": 2, 00:15:35.333 "bdev_name": "Malloc3", 00:15:35.333 "name": "Malloc3", 00:15:35.333 "nguid": "2D626C76FCE64098AB8FD3312AD8BA50", 00:15:35.333 "uuid": "2d626c76-fce6-4098-ab8f-d3312ad8ba50" 00:15:35.333 } 00:15:35.333 ] 00:15:35.333 }, 00:15:35.333 { 00:15:35.333 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:35.333 "subtype": "NVMe", 00:15:35.333 "listen_addresses": [ 00:15:35.333 { 00:15:35.333 "trtype": "VFIOUSER", 00:15:35.333 "adrfam": "IPv4", 00:15:35.333 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:35.333 "trsvcid": "0" 00:15:35.333 } 00:15:35.333 ], 00:15:35.333 
"allow_any_host": true, 00:15:35.333 "hosts": [], 00:15:35.333 "serial_number": "SPDK2", 00:15:35.333 "model_number": "SPDK bdev Controller", 00:15:35.333 "max_namespaces": 32, 00:15:35.333 "min_cntlid": 1, 00:15:35.333 "max_cntlid": 65519, 00:15:35.333 "namespaces": [ 00:15:35.333 { 00:15:35.333 "nsid": 1, 00:15:35.333 "bdev_name": "Malloc2", 00:15:35.333 "name": "Malloc2", 00:15:35.333 "nguid": "89E2AEB613B241DAB30D3305FBF99442", 00:15:35.333 "uuid": "89e2aeb6-13b2-41da-b30d-3305fbf99442" 00:15:35.333 } 00:15:35.333 ] 00:15:35.333 } 00:15:35.333 ] 00:15:35.333 06:58:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:35.333 06:58:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3087101 00:15:35.333 06:58:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:35.333 06:58:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:35.333 06:58:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:35.333 06:58:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:35.333 06:58:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:35.333 06:58:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:35.333 06:58:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:35.333 06:58:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:35.333 [2024-10-16 06:58:34.828316] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:35.594 Malloc4 00:15:35.594 06:58:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:35.594 [2024-10-16 06:58:35.029720] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:35.594 06:58:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:35.594 Asynchronous Event Request test 00:15:35.594 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:35.594 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:35.594 Registering asynchronous event callbacks... 00:15:35.594 Starting namespace attribute notice tests for all controllers... 00:15:35.594 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:35.594 aer_cb - Changed Namespace 00:15:35.594 Cleaning up... 
00:15:35.855 [ 00:15:35.855 { 00:15:35.855 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:35.855 "subtype": "Discovery", 00:15:35.855 "listen_addresses": [], 00:15:35.855 "allow_any_host": true, 00:15:35.855 "hosts": [] 00:15:35.855 }, 00:15:35.855 { 00:15:35.855 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:35.855 "subtype": "NVMe", 00:15:35.855 "listen_addresses": [ 00:15:35.855 { 00:15:35.855 "trtype": "VFIOUSER", 00:15:35.855 "adrfam": "IPv4", 00:15:35.855 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:35.855 "trsvcid": "0" 00:15:35.855 } 00:15:35.855 ], 00:15:35.855 "allow_any_host": true, 00:15:35.855 "hosts": [], 00:15:35.855 "serial_number": "SPDK1", 00:15:35.855 "model_number": "SPDK bdev Controller", 00:15:35.855 "max_namespaces": 32, 00:15:35.855 "min_cntlid": 1, 00:15:35.855 "max_cntlid": 65519, 00:15:35.855 "namespaces": [ 00:15:35.855 { 00:15:35.855 "nsid": 1, 00:15:35.855 "bdev_name": "Malloc1", 00:15:35.855 "name": "Malloc1", 00:15:35.855 "nguid": "F69CA98C3F2B43BCB6843273E4746423", 00:15:35.855 "uuid": "f69ca98c-3f2b-43bc-b684-3273e4746423" 00:15:35.855 }, 00:15:35.855 { 00:15:35.855 "nsid": 2, 00:15:35.855 "bdev_name": "Malloc3", 00:15:35.855 "name": "Malloc3", 00:15:35.855 "nguid": "2D626C76FCE64098AB8FD3312AD8BA50", 00:15:35.855 "uuid": "2d626c76-fce6-4098-ab8f-d3312ad8ba50" 00:15:35.855 } 00:15:35.855 ] 00:15:35.855 }, 00:15:35.855 { 00:15:35.855 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:35.855 "subtype": "NVMe", 00:15:35.855 "listen_addresses": [ 00:15:35.855 { 00:15:35.855 "trtype": "VFIOUSER", 00:15:35.855 "adrfam": "IPv4", 00:15:35.855 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:35.855 "trsvcid": "0" 00:15:35.855 } 00:15:35.855 ], 00:15:35.855 "allow_any_host": true, 00:15:35.855 "hosts": [], 00:15:35.855 "serial_number": "SPDK2", 00:15:35.855 "model_number": "SPDK bdev Controller", 00:15:35.855 "max_namespaces": 32, 00:15:35.855 "min_cntlid": 1, 00:15:35.855 "max_cntlid": 65519, 00:15:35.855 "namespaces": [ 00:15:35.855 { 00:15:35.855 "nsid": 1, 00:15:35.855 "bdev_name": "Malloc2", 00:15:35.855 "name": "Malloc2", 00:15:35.855 "nguid": "89E2AEB613B241DAB30D3305FBF99442", 00:15:35.855 "uuid": "89e2aeb6-13b2-41da-b30d-3305fbf99442" 00:15:35.855 }, 00:15:35.855 { 00:15:35.855 "nsid": 2, 00:15:35.855 "bdev_name": "Malloc4", 00:15:35.855 "name": "Malloc4", 00:15:35.855 "nguid": "5324844B866B4C6482F93419A6FAA9F5", 00:15:35.855 "uuid": "5324844b-866b-4c64-82f9-3419a6faa9f5" 00:15:35.855 } 00:15:35.855 ] 00:15:35.855 } 00:15:35.855 ] 00:15:35.855 06:58:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3087101 00:15:35.855 06:58:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:35.855 06:58:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3078170 00:15:35.855 06:58:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 3078170 ']' 00:15:35.855 06:58:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 3078170 00:15:35.855 06:58:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:15:35.855 06:58:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:35.855 06:58:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3078170 00:15:35.855 06:58:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:35.855 06:58:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:35.855 06:58:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3078170' 00:15:35.855 killing process with pid 3078170 00:15:35.855 06:58:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 3078170 00:15:35.855 06:58:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 3078170 00:15:36.117 06:58:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:36.117 06:58:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:36.117 06:58:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:36.117 06:58:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:36.117 06:58:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:36.117 06:58:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3087289 00:15:36.117 06:58:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3087289' 00:15:36.117 Process pid: 3087289 00:15:36.117 06:58:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:36.117 06:58:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:36.117 06:58:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3087289 00:15:36.117 06:58:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 3087289 ']' 00:15:36.117 06:58:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:36.117 06:58:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:36.117 06:58:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:36.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:36.117 06:58:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:36.117 06:58:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:36.117 [2024-10-16 06:58:35.495556] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:36.117 [2024-10-16 06:58:35.496247] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
00:15:36.117 [2024-10-16 06:58:35.496286] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:36.117 [2024-10-16 06:58:35.565274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:36.117 [2024-10-16 06:58:35.594916] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:36.117 [2024-10-16 06:58:35.594944] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:36.117 [2024-10-16 06:58:35.594949] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:36.117 [2024-10-16 06:58:35.594954] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:36.117 [2024-10-16 06:58:35.594959] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:36.117 [2024-10-16 06:58:35.596366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:36.117 [2024-10-16 06:58:35.596709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:36.117 [2024-10-16 06:58:35.596820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.117 [2024-10-16 06:58:35.596822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:36.377 [2024-10-16 06:58:35.647179] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:36.377 [2024-10-16 06:58:35.648215] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:15:36.377 [2024-10-16 06:58:35.649084] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:36.377 [2024-10-16 06:58:35.650022] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:36.377 [2024-10-16 06:58:35.650033] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:15:36.377 06:58:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:36.377 06:58:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:15:36.377 06:58:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:37.320 06:58:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:37.580 06:58:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:37.580 06:58:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:37.580 06:58:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:37.580 06:58:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:37.580 06:58:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:37.842 Malloc1 00:15:37.842 06:58:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:37.842 06:58:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:38.103 06:58:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:38.363 06:58:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:38.363 06:58:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:38.363 06:58:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:38.624 Malloc2 00:15:38.624 06:58:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:38.624 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:38.885 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:39.146 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:39.146 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3087289 00:15:39.146 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@950 -- # '[' -z 3087289 ']' 00:15:39.146 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 3087289 00:15:39.146 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:15:39.146 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:39.146 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3087289 00:15:39.146 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:39.146 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:39.146 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3087289' 00:15:39.146 killing process with pid 3087289 00:15:39.146 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 3087289 00:15:39.146 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 3087289 00:15:39.146 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:39.408 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:39.408 00:15:39.408 real 0m50.222s 00:15:39.408 user 3m14.520s 00:15:39.408 sys 0m2.673s 00:15:39.408 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:39.408 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:39.408 ************************************ 00:15:39.408 END TEST nvmf_vfio_user 00:15:39.408 ************************************ 00:15:39.408 06:58:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:39.408 06:58:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:39.408 06:58:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:39.408 06:58:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:39.408 ************************************ 00:15:39.408 START TEST nvmf_vfio_user_nvme_compliance 00:15:39.408 ************************************ 00:15:39.408 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:39.408 * Looking for test storage... 
00:15:39.408 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:39.408 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:39.408 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lcov --version 00:15:39.408 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:39.671 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:39.671 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:39.671 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:39.671 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:39.671 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:15:39.671 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:15:39.671 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:15:39.671 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:15:39.671 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:15:39.671 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:15:39.671 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:15:39.671 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:39.671 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:15:39.671 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:15:39.671 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:39.671 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:39.671 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:15:39.671 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:15:39.671 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:39.671 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:15:39.671 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:15:39.671 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:15:39.671 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:15:39.671 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:39.671 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:15:39.671 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:15:39.671 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:39.671 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:39.671 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:15:39.671 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:39.671 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:39.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.671 --rc genhtml_branch_coverage=1 00:15:39.671 --rc genhtml_function_coverage=1 00:15:39.671 --rc genhtml_legend=1 00:15:39.671 --rc geninfo_all_blocks=1 00:15:39.671 --rc geninfo_unexecuted_blocks=1 00:15:39.671 00:15:39.671 ' 00:15:39.671 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:39.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.671 --rc genhtml_branch_coverage=1 00:15:39.671 --rc genhtml_function_coverage=1 00:15:39.671 --rc genhtml_legend=1 00:15:39.671 --rc geninfo_all_blocks=1 00:15:39.671 --rc geninfo_unexecuted_blocks=1 00:15:39.671 00:15:39.671 ' 00:15:39.671 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:39.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.671 --rc genhtml_branch_coverage=1 00:15:39.671 --rc genhtml_function_coverage=1 00:15:39.671 --rc genhtml_legend=1 00:15:39.671 --rc geninfo_all_blocks=1 00:15:39.671 --rc geninfo_unexecuted_blocks=1 00:15:39.671 00:15:39.671 ' 00:15:39.671 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:39.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.671 --rc genhtml_branch_coverage=1 00:15:39.671 --rc genhtml_function_coverage=1 00:15:39.671 --rc genhtml_legend=1 00:15:39.671 --rc geninfo_all_blocks=1 00:15:39.671 --rc 
geninfo_unexecuted_blocks=1 00:15:39.671 00:15:39.671 ' 00:15:39.671 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:39.671 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:39.671 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:39.671 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:39.671 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:39.671 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:39.671 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:39.671 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:39.671 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:39.671 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:39.671 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:39.671 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:39.671 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:39.671 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:39.671 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:39.671 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:39.671 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:39.671 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:39.671 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:39.671 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:15:39.671 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:39.671 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:39.671 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:39.671 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.671 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.671 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.671 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:39.672 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.672 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:15:39.672 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:39.672 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:39.672 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:39.672 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:39.672 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:15:39.672 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:39.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:39.672 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:39.672 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:39.672 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:39.672 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:39.672 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:39.672 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:39.672 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:39.672 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:39.672 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3088039 00:15:39.672 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3088039' 00:15:39.672 Process pid: 3088039 00:15:39.672 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:39.672 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:39.672 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3088039 00:15:39.672 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 3088039 ']' 00:15:39.672 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:39.672 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:39.672 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:39.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:39.672 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:39.672 06:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:39.672 [2024-10-16 06:58:39.023760] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
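A note on the "[: : integer expression expected" message above, which recurs each time nvmf/common.sh is re-sourced later in this log: it is a script bug, not test output. Line 33 of common.sh runs '[' '' -eq 1 ']', handing the numeric -eq operator an empty string. A minimal sketch of the failure mode and a defensive rewrite; the variable name "flag" is illustrative, not taken from the script:

    flag=""                       # unset/empty in this CI environment
    [ "$flag" -eq 1 ]             # fails: -eq needs integer operands -> "integer expression expected"
    [ "${flag:-0}" -eq 1 ]        # safe: an empty value defaults to 0 before the numeric test

The run is unaffected because the failed test simply behaves as false, but the message repeats at every later source of common.sh below.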
00:15:39.672 [2024-10-16 06:58:39.023833] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:39.672 [2024-10-16 06:58:39.080027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:39.672 [2024-10-16 06:58:39.111565] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:39.672 [2024-10-16 06:58:39.111597] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:39.672 [2024-10-16 06:58:39.111603] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:39.672 [2024-10-16 06:58:39.111607] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:39.672 [2024-10-16 06:58:39.111612] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:39.672 [2024-10-16 06:58:39.112767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:39.672 [2024-10-16 06:58:39.112899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.672 [2024-10-16 06:58:39.112901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:39.933 06:58:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:39.933 06:58:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:15:39.933 06:58:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:40.874 06:58:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:40.874 06:58:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:40.874 06:58:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:40.874 06:58:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.874 06:58:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:40.874 06:58:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.874 06:58:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:40.874 06:58:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:40.874 06:58:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.874 06:58:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:40.874 malloc0 00:15:40.874 06:58:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.874 06:58:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:40.874 06:58:40 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.874 06:58:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:40.874 06:58:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.874 06:58:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:40.874 06:58:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.874 06:58:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:40.874 06:58:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.874 06:58:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:40.874 06:58:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.874 06:58:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:40.874 06:58:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.874 06:58:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:41.135 00:15:41.135 00:15:41.135 CUnit - A unit testing framework for C - Version 2.1-3 00:15:41.135 http://cunit.sourceforge.net/ 00:15:41.135 00:15:41.135 00:15:41.135 Suite: nvme_compliance 00:15:41.135 Test: admin_identify_ctrlr_verify_dptr ...[2024-10-16 06:58:40.442246] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.135 [2024-10-16 06:58:40.443556] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:41.135 [2024-10-16 06:58:40.443568] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:41.135 [2024-10-16 06:58:40.443573] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:41.135 [2024-10-16 06:58:40.445269] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.135 passed 00:15:41.135 Test: admin_identify_ctrlr_verify_fused ...[2024-10-16 06:58:40.522771] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.135 [2024-10-16 06:58:40.527803] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.135 passed 00:15:41.135 Test: admin_identify_ns ...[2024-10-16 06:58:40.604203] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.396 [2024-10-16 06:58:40.667851] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:41.396 [2024-10-16 06:58:40.675852] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:41.396 [2024-10-16 06:58:40.696935] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:15:41.396 passed 00:15:41.396 Test: admin_get_features_mandatory_features ...[2024-10-16 06:58:40.769181] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.396 [2024-10-16 06:58:40.773208] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.396 passed 00:15:41.396 Test: admin_get_features_optional_features ...[2024-10-16 06:58:40.849682] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.396 [2024-10-16 06:58:40.852694] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.396 passed 00:15:41.657 Test: admin_set_features_number_of_queues ...[2024-10-16 06:58:40.927208] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.657 [2024-10-16 06:58:41.031931] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.657 passed 00:15:41.657 Test: admin_get_log_page_mandatory_logs ...[2024-10-16 06:58:41.107959] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.657 [2024-10-16 06:58:41.110980] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.657 passed 00:15:41.918 Test: admin_get_log_page_with_lpo ...[2024-10-16 06:58:41.185757] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.918 [2024-10-16 06:58:41.252851] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:41.918 [2024-10-16 06:58:41.265909] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.918 passed 00:15:41.918 Test: fabric_property_get ...[2024-10-16 06:58:41.340963] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.918 [2024-10-16 06:58:41.342164] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:41.918 [2024-10-16 06:58:41.343983] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.918 passed 00:15:42.179 Test: admin_delete_io_sq_use_admin_qid ...[2024-10-16 06:58:41.421456] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:42.179 [2024-10-16 06:58:41.422653] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:42.179 [2024-10-16 06:58:41.424476] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:42.179 passed 00:15:42.179 Test: admin_delete_io_sq_delete_sq_twice ...[2024-10-16 06:58:41.499219] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:42.179 [2024-10-16 06:58:41.583850] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:42.179 [2024-10-16 06:58:41.599848] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:42.179 [2024-10-16 06:58:41.604918] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:42.179 passed 00:15:42.179 Test: admin_delete_io_cq_use_admin_qid ...[2024-10-16 06:58:41.678122] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:42.179 [2024-10-16 06:58:41.679319] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:42.440 [2024-10-16 06:58:41.681138] vfio_user.c:2798:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:15:42.440 passed 00:15:42.440 Test: admin_delete_io_cq_delete_cq_first ...[2024-10-16 06:58:41.756191] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:42.440 [2024-10-16 06:58:41.835852] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:42.440 [2024-10-16 06:58:41.859852] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:42.440 [2024-10-16 06:58:41.864926] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:42.440 passed 00:15:42.440 Test: admin_create_io_cq_verify_iv_pc ...[2024-10-16 06:58:41.937115] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:42.440 [2024-10-16 06:58:41.938312] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:42.440 [2024-10-16 06:58:41.938329] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:42.701 [2024-10-16 06:58:41.941132] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:42.701 passed 00:15:42.701 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-10-16 06:58:42.015849] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:42.701 [2024-10-16 06:58:42.109851] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:42.701 [2024-10-16 06:58:42.117854] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:42.701 [2024-10-16 06:58:42.125867] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:42.701 [2024-10-16 06:58:42.133852] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:42.701 [2024-10-16 06:58:42.162920] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:42.701 passed 00:15:42.962 Test: admin_create_io_sq_verify_pc ...[2024-10-16 06:58:42.235139] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:42.962 [2024-10-16 06:58:42.251857] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:42.962 [2024-10-16 06:58:42.269291] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:42.962 passed 00:15:42.962 Test: admin_create_io_qp_max_qps ...[2024-10-16 06:58:42.347748] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:44.347 [2024-10-16 06:58:43.443852] nvme_ctrlr.c:5504:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:15:44.347 [2024-10-16 06:58:43.821611] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:44.347 passed 00:15:44.608 Test: admin_create_io_sq_shared_cq ...[2024-10-16 06:58:43.896407] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:44.608 [2024-10-16 06:58:44.028850] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:44.608 [2024-10-16 06:58:44.065897] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:44.608 passed 00:15:44.608 00:15:44.608 Run Summary: Type Total Ran Passed Failed Inactive 00:15:44.608 suites 1 1 n/a 0 0 00:15:44.608 tests 18 18 18 0 0 00:15:44.608 asserts 360 
360 360 0 n/a 00:15:44.608 00:15:44.608 Elapsed time = 1.486 seconds 00:15:44.608 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3088039 00:15:44.608 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 3088039 ']' 00:15:44.608 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 3088039 00:15:44.868 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:15:44.869 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:44.869 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3088039 00:15:44.869 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:44.869 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:44.869 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3088039' 00:15:44.869 killing process with pid 3088039 00:15:44.869 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 3088039 00:15:44.869 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 3088039 00:15:44.869 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:44.869 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:44.869 00:15:44.869 real 0m5.564s 00:15:44.869 user 0m15.660s 00:15:44.869 sys 0m0.497s 00:15:44.869 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:44.869 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:44.869 ************************************ 00:15:44.869 END TEST nvmf_vfio_user_nvme_compliance 00:15:44.869 ************************************ 00:15:44.869 06:58:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:44.869 06:58:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:44.869 06:58:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:44.869 06:58:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:44.869 ************************************ 00:15:44.869 START TEST nvmf_vfio_user_fuzz 00:15:44.869 ************************************ 00:15:44.869 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:45.130 * Looking for test storage... 
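Before the fuzz harness proper starts, the "test storage" probe below re-runs the lcov version gate (lt 1.15 2): scripts/common.sh splits both version strings on '.', '-' and ':' and compares them field by field, as traced in the decimal/ver1[v]/ver2[v] steps that follow. A minimal sketch of that comparison scheme, assuming purely numeric fields (the function name lt mirrors the trace):

    lt() {                                    # lt 1.15 2 -> true, since 1 < 2 in the first field
        local IFS='.-:' i
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first differing field decides
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                              # all fields equal -> not less-than
    }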
00:15:45.130 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:45.130 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:45.130 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:15:45.130 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:45.130 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:45.130 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:45.130 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:45.130 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:45.130 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:45.130 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:45.130 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:45.130 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:45.130 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:15:45.130 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:45.130 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:45.130 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:45.130 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:45.130 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:15:45.130 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:45.130 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:45.130 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:45.130 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:45.130 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:45.130 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:45.130 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:45.130 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:45.130 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:45.130 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:45.130 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:45.130 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:45.130 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:45.130 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:45.130 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:15:45.130 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:45.130 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:45.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.130 --rc genhtml_branch_coverage=1 00:15:45.130 --rc genhtml_function_coverage=1 00:15:45.130 --rc genhtml_legend=1 00:15:45.130 --rc geninfo_all_blocks=1 00:15:45.130 --rc geninfo_unexecuted_blocks=1 00:15:45.130 00:15:45.130 ' 00:15:45.130 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:45.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.130 --rc genhtml_branch_coverage=1 00:15:45.130 --rc genhtml_function_coverage=1 00:15:45.130 --rc genhtml_legend=1 00:15:45.130 --rc geninfo_all_blocks=1 00:15:45.130 --rc geninfo_unexecuted_blocks=1 00:15:45.130 00:15:45.130 ' 00:15:45.130 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:45.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.130 --rc genhtml_branch_coverage=1 00:15:45.130 --rc genhtml_function_coverage=1 00:15:45.130 --rc genhtml_legend=1 00:15:45.130 --rc geninfo_all_blocks=1 00:15:45.130 --rc geninfo_unexecuted_blocks=1 00:15:45.130 00:15:45.130 ' 00:15:45.130 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:45.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.130 --rc genhtml_branch_coverage=1 00:15:45.130 --rc genhtml_function_coverage=1 00:15:45.130 --rc genhtml_legend=1 00:15:45.130 --rc geninfo_all_blocks=1 00:15:45.130 --rc geninfo_unexecuted_blocks=1 00:15:45.130 00:15:45.130 ' 00:15:45.130 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:45.130 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:45.130 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:45.130 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:45.130 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:45.130 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:45.130 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:45.130 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:45.130 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:45.130 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:45.130 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:45.130 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:45.130 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:45.130 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:45.130 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:45.130 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:45.130 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:45.131 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:45.131 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:45.131 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:45.131 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:45.131 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:45.131 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:45.131 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=[paths/export.sh@2-@6 re-export and echo the same toolchain PATH already shown in full above; duplicate values elided] 00:15:45.131 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:15:45.131 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:45.131 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:45.131 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:45.131 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:45.131 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:45.131 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:15:45.131 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:45.131 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:45.131 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:45.131 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:45.131 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:45.131 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:45.131 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:45.131 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:45.131 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:45.131 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:45.131 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:45.131 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3089262 00:15:45.131 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3089262' 00:15:45.131 Process pid: 3089262 00:15:45.131 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:45.131 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:45.131 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3089262 00:15:45.131 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 3089262 ']' 00:15:45.131 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:45.131 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:45.131 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:45.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
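The pid and "Waiting for process..." lines above come from the harness's standard launch pattern: background the nvmf_tgt binary, record its pid, arm a cleanup trap, then block until the RPC socket answers before issuing any rpc_cmd. A plain-shell sketch of that sequence; killprocess and waitforlisten are SPDK test helpers whose internals are not shown in this log, so the trap body and polling loop below are simplified stand-ins:

    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    trap 'kill -9 "$nvmfpid" 2>/dev/null; exit 1' SIGINT SIGTERM EXIT
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done    # wait for the RPC socket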
00:15:45.131 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:45.131 06:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:46.072 06:58:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:46.072 06:58:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:15:46.072 06:58:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:47.056 06:58:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:47.056 06:58:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.056 06:58:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:47.056 06:58:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.056 06:58:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:47.056 06:58:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:47.056 06:58:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.056 06:58:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:47.056 malloc0 00:15:47.056 06:58:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.056 06:58:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:47.056 06:58:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.056 06:58:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:47.056 06:58:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.056 06:58:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:47.056 06:58:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.056 06:58:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:47.056 06:58:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.056 06:58:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:47.056 06:58:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.056 06:58:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:47.056 06:58:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.056 06:58:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
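Collected from the rpc_cmd trace above, standing up the vfio-user target takes one transport, one bdev, and one subsystem with a namespace and a listener. The same sequence can be replayed by hand with scripts/rpc.py against the default /var/tmp/spdk.sock; every RPC name and argument below is copied from the trace (rpc_cmd is the harness wrapper for these calls):

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    $rpc bdev_malloc_create 64 512 -b malloc0        # 64 MB malloc bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    $rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    $rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0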
00:15:47.056 06:58:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:19.205 Fuzzing completed. Shutting down the fuzz application 00:16:19.205 00:16:19.205 Dumping successful admin opcodes: 00:16:19.205 8, 9, 10, 24, 00:16:19.205 Dumping successful io opcodes: 00:16:19.205 0, 00:16:19.205 NS: 0x20000081ef00 I/O qp, Total commands completed: 1414441, total successful commands: 5558, random_seed: 879960704 00:16:19.205 NS: 0x20000081ef00 admin qp, Total commands completed: 351054, total successful commands: 2831, random_seed: 3535993984 00:16:19.205 06:59:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:19.206 06:59:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.206 06:59:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:19.206 06:59:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.206 06:59:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3089262 00:16:19.206 06:59:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 3089262 ']' 00:16:19.206 06:59:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 3089262 00:16:19.206 06:59:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:16:19.206 06:59:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:19.206 06:59:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3089262 00:16:19.206 06:59:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:19.206 06:59:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:19.206 06:59:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3089262' 00:16:19.206 killing process with pid 3089262 00:16:19.206 06:59:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 3089262 00:16:19.206 06:59:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 3089262 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:19.206 00:16:19.206 real 0m32.797s 00:16:19.206 user 0m37.928s 00:16:19.206 sys 0m24.585s 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:19.206 
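For reference, the fuzz pass summarized above (roughly 1.41 M I/O commands and 351 K admin commands) can be reproduced outside Jenkins with the flags from the trace. -t is the runtime in seconds and -S the random seed, consistent with the real 0m32.797s wall clock reported above; the remaining flags are copied verbatim from the command line rather than documented here:

    ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
        -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a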
************************************ 00:16:19.206 END TEST nvmf_vfio_user_fuzz 00:16:19.206 ************************************ 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:19.206 ************************************ 00:16:19.206 START TEST nvmf_auth_target 00:16:19.206 ************************************ 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:19.206 * Looking for test storage... 00:16:19.206 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:19.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.206 --rc genhtml_branch_coverage=1 00:16:19.206 --rc genhtml_function_coverage=1 00:16:19.206 --rc genhtml_legend=1 00:16:19.206 --rc geninfo_all_blocks=1 00:16:19.206 --rc geninfo_unexecuted_blocks=1 00:16:19.206 00:16:19.206 ' 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:19.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.206 --rc genhtml_branch_coverage=1 00:16:19.206 --rc genhtml_function_coverage=1 00:16:19.206 --rc genhtml_legend=1 00:16:19.206 --rc geninfo_all_blocks=1 00:16:19.206 --rc geninfo_unexecuted_blocks=1 00:16:19.206 00:16:19.206 ' 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:19.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.206 --rc genhtml_branch_coverage=1 00:16:19.206 --rc genhtml_function_coverage=1 00:16:19.206 --rc genhtml_legend=1 00:16:19.206 --rc geninfo_all_blocks=1 00:16:19.206 --rc geninfo_unexecuted_blocks=1 00:16:19.206 00:16:19.206 ' 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:19.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.206 --rc genhtml_branch_coverage=1 00:16:19.206 --rc genhtml_function_coverage=1 00:16:19.206 --rc genhtml_legend=1 00:16:19.206 --rc geninfo_all_blocks=1 00:16:19.206 --rc geninfo_unexecuted_blocks=1 00:16:19.206 00:16:19.206 ' 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:19.206 06:59:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:19.206 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.207 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=[paths/export.sh@3-@6 re-export and echo the same toolchain PATH already shown in full above; duplicate values elided] 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:19.207 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:19.207 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:19.207 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:19.207 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:19.207 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:19.207 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:19.207 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:16:19.207 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:16:19.207 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:19.207 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:16:19.207 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:16:19.207 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:16:19.207 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:19.207 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:19.207 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:19.207 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:16:19.207 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:16:19.207 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:16:19.207 06:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.805 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:25.805 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:16:25.806 
06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:25.806 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:25.806 06:59:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:25.806 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:25.806 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:25.806 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # 
net_devs+=("${pci_net_devs[@]}") 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # is_hw=yes 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:25.806 06:59:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:25.806 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:25.806 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.632 ms 00:16:25.806 00:16:25.806 --- 10.0.0.2 ping statistics --- 00:16:25.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.806 rtt min/avg/max/mdev = 0.632/0.632/0.632/0.000 ms 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:25.806 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:25.806 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:16:25.806 00:16:25.806 --- 10.0.0.1 ping statistics --- 00:16:25.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.806 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # return 0 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:16:25.806 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:25.807 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:16:25.807 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:16:25.807 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:16:25.807 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:16:25.807 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:25.807 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.807 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=3099244 00:16:25.807 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 3099244 00:16:25.807 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:25.807 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3099244 ']' 00:16:25.807 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.807 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:25.807 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
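
What the trace above just did: nvmftestinit turned the two detected e810 ports (cvl_0_0 and cvl_0_1) into a point-to-point NVMe/TCP test bed on one machine by moving one port into a private network namespace, so the target (10.0.0.2 inside cvl_0_0_ns_spdk) and the initiator (10.0.0.1 in the root namespace) talk over real NICs, and nvmf_tgt is then launched inside that namespace. Condensed from the commands logged above (the ipts helper is just iptables plus a bookkeeping comment):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the default NVMe/TCP port toward the namespace
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
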
00:16:25.807 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:25.807 06:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.380 06:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:26.380 06:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:26.380 06:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:16:26.380 06:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:26.380 06:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.641 06:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:26.641 06:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=3099357 00:16:26.641 06:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:26.641 06:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:26.641 06:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:16:26.641 06:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:26.641 06:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:26.641 06:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:26.641 06:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=null 00:16:26.641 06:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:16:26.641 06:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:26.641 06:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=9de20e4bd42037cca88261eaa1b2bef8213d022f30eb7e01 00:16:26.641 06:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:16:26.641 06:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.cw9 00:16:26.641 06:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 9de20e4bd42037cca88261eaa1b2bef8213d022f30eb7e01 0 00:16:26.641 06:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 9de20e4bd42037cca88261eaa1b2bef8213d022f30eb7e01 0 00:16:26.641 06:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:26.641 06:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:26.641 06:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=9de20e4bd42037cca88261eaa1b2bef8213d022f30eb7e01 00:16:26.641 06:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=0 00:16:26.641 06:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 
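
gen_dhchap_key, traced above, builds the first DH-HMAC-CHAP secret: xxd -p -c0 -l 24 /dev/urandom yields 48 hex characters, and format_dhchap_key (the python - step) wraps them into the DHHC-1 representation. Judging from the secret that shows up later in this log, the 48 hex characters are themselves used as the secret bytes, a CRC-32 of them is appended, and the result is base64-encoded behind a "DHHC-1:<hmac>:" prefix (0 here, meaning no hash). A minimal sketch of that encoding, assuming the CRC is packed little-endian as nvme-cli does:

    # hypothetical stand-in for format_dhchap_key, for illustration only
    python3 - <<'PY'
    import base64, struct, zlib
    secret = b"9de20e4bd42037cca88261eaa1b2bef8213d022f30eb7e01"  # hex text from the trace
    crc = struct.pack("<I", zlib.crc32(secret))                   # assumption: LE CRC-32 suffix
    print("DHHC-1:00:" + base64.b64encode(secret + crc).decode() + ":")
    PY
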
00:16:26.641 06:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.cw9 00:16:26.641 06:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.cw9 00:16:26.641 06:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.cw9 00:16:26.641 06:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:16:26.641 06:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:26.641 06:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:26.641 06:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:26.641 06:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:16:26.641 06:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:16:26.641 06:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:26.641 06:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=1599173de18da0f37d0ae963e67be7297db1ff6733a3365915343268a62d3da1 00:16:26.641 06:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:16:26.641 06:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.0Wg 00:16:26.641 06:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 1599173de18da0f37d0ae963e67be7297db1ff6733a3365915343268a62d3da1 3 00:16:26.641 06:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 1599173de18da0f37d0ae963e67be7297db1ff6733a3365915343268a62d3da1 3 00:16:26.641 06:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:26.641 06:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:26.641 06:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=1599173de18da0f37d0ae963e67be7297db1ff6733a3365915343268a62d3da1 00:16:26.641 06:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:16:26.641 06:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:26.641 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.0Wg 00:16:26.641 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.0Wg 00:16:26.641 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.0Wg 00:16:26.641 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:16:26.641 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:26.641 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:26.641 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:26.641 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 
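
Note the pairing: keys[0] is the "null"-tagged secret /tmp/spdk.key-null.cw9 and ckeys[0] is a sha512-tagged one, so every subsystem key gets a matching controller key for bidirectional authentication later in the test. The digit after "DHHC-1:" follows the digests map traced earlier (null=0, sha256=1, sha384=2, sha512=3). Outside this harness, nvme-cli can emit secrets in the same representation; a hedged equivalent, with flag spellings taken from nvme-cli rather than from this log:

    nvme gen-dhchap-key --hmac=3 --key-length=32    # prints a DHHC-1:03:...: secret
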
00:16:26.641 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:16:26.641 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:26.641 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=d7a929addf53d29fcd8918fb695ff852 00:16:26.641 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:16:26.641 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.UWm 00:16:26.641 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key d7a929addf53d29fcd8918fb695ff852 1 00:16:26.641 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 d7a929addf53d29fcd8918fb695ff852 1 00:16:26.641 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:26.641 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:26.642 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=d7a929addf53d29fcd8918fb695ff852 00:16:26.642 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:16:26.642 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:26.642 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.UWm 00:16:26.642 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.UWm 00:16:26.642 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.UWm 00:16:26.642 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:16:26.642 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:26.642 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:26.642 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:26.642 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:16:26.642 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:16:26.642 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:26.642 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=6778e83e4e9553f3dcf6d4cebb5bd259b6f31737eba0224a 00:16:26.642 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:16:26.642 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.7eZ 00:16:26.642 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 6778e83e4e9553f3dcf6d4cebb5bd259b6f31737eba0224a 2 00:16:26.642 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 6778e83e4e9553f3dcf6d4cebb5bd259b6f31737eba0224a 2 00:16:26.642 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:26.642 06:59:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:26.642 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=6778e83e4e9553f3dcf6d4cebb5bd259b6f31737eba0224a 00:16:26.642 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:16:26.642 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:26.904 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.7eZ 00:16:26.904 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.7eZ 00:16:26.904 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.7eZ 00:16:26.904 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:16:26.904 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:26.904 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:26.904 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:26.904 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:16:26.904 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:16:26.904 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:26.904 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=b70cfa80c18d2f29b347237a50737b9940cc585bc85f59f0 00:16:26.904 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:16:26.904 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.9X0 00:16:26.904 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key b70cfa80c18d2f29b347237a50737b9940cc585bc85f59f0 2 00:16:26.904 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 b70cfa80c18d2f29b347237a50737b9940cc585bc85f59f0 2 00:16:26.904 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:26.904 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:26.904 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=b70cfa80c18d2f29b347237a50737b9940cc585bc85f59f0 00:16:26.904 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:16:26.904 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:26.904 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.9X0 00:16:26.904 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.9X0 00:16:26.904 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.9X0 00:16:26.904 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:16:26.904 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 
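
The pattern repeating through this stretch is the same for every secret: write it to a mktemp file named after its digest, lock the permissions down, and echo the path back to the caller. Roughly (the payload here is a placeholder, not a value from the log):

    file=$(mktemp -t spdk.key-sha384.XXX)        # e.g. /tmp/spdk.key-sha384.9X0
    echo 'DHHC-1:02:<base64-secret>:' > "$file"  # placeholder payload
    chmod 0600 "$file"                           # owner-only, presumably expected by keyring_file
    echo "$file"
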
00:16:26.904 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:26.904 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:26.904 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:16:26.904 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:16:26.904 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:26.904 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=4c21e5aa21f1ba8ed031433b1fe9f36a 00:16:26.904 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:16:26.904 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.nwo 00:16:26.904 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 4c21e5aa21f1ba8ed031433b1fe9f36a 1 00:16:26.904 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 4c21e5aa21f1ba8ed031433b1fe9f36a 1 00:16:26.904 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:26.904 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:26.904 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=4c21e5aa21f1ba8ed031433b1fe9f36a 00:16:26.904 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:16:26.904 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:26.904 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.nwo 00:16:26.904 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.nwo 00:16:26.904 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.nwo 00:16:26.905 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:16:26.905 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:26.905 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:26.905 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:26.905 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:16:26.905 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:16:26.905 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:26.905 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=7d1cc918b6b8d747aad2ff75586ba6f7f2a766952e278af2af8ce68b84545ebc 00:16:26.905 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:16:26.905 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.1UP 00:16:26.905 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # 
format_dhchap_key 7d1cc918b6b8d747aad2ff75586ba6f7f2a766952e278af2af8ce68b84545ebc 3 00:16:26.905 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 7d1cc918b6b8d747aad2ff75586ba6f7f2a766952e278af2af8ce68b84545ebc 3 00:16:26.905 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:26.905 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:26.905 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=7d1cc918b6b8d747aad2ff75586ba6f7f2a766952e278af2af8ce68b84545ebc 00:16:26.905 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:16:26.905 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:26.905 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.1UP 00:16:26.905 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.1UP 00:16:26.905 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.1UP 00:16:26.905 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:16:26.905 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 3099244 00:16:26.905 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3099244 ']' 00:16:26.905 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:26.905 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:26.905 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:26.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:26.905 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:26.905 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.167 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:27.167 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:27.167 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 3099357 /var/tmp/host.sock 00:16:27.167 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3099357 ']' 00:16:27.167 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:16:27.167 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:27.167 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:27.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
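
Two SPDK processes are now up: the nvmf_tgt inside the namespace answering RPCs on /var/tmp/spdk.sock (pid 3099244) and the host-side spdk_tgt started with -m 2 -r /var/tmp/host.sock -L nvme_auth (pid 3099357). The stretch below registers each keys[i]/ckeys[i] file on both sides; rpc_cmd hits the default target socket, while the hostrpc wrapper is just rpc.py pointed at the host socket, e.g. for key 0:

    # target side (default /var/tmp/spdk.sock) and host side, as in the trace below
    scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.cw9
    scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0  /tmp/spdk.key-null.cw9
    scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0Wg
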
00:16:27.167 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:27.167 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.428 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:27.428 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:27.428 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:16:27.428 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.428 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.428 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.428 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:27.428 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.cw9 00:16:27.428 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.428 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.428 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.428 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.cw9 00:16:27.428 06:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.cw9 00:16:27.690 06:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.0Wg ]] 00:16:27.690 06:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0Wg 00:16:27.690 06:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.690 06:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.690 06:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.690 06:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0Wg 00:16:27.690 06:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0Wg 00:16:27.955 06:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:27.955 06:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.UWm 00:16:27.955 06:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.955 06:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.955 06:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.955 06:59:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.UWm 00:16:27.955 06:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.UWm 00:16:27.955 06:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.7eZ ]] 00:16:27.955 06:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.7eZ 00:16:27.955 06:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.955 06:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.955 06:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.955 06:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.7eZ 00:16:27.955 06:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.7eZ 00:16:28.217 06:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:28.217 06:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.9X0 00:16:28.217 06:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.217 06:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.217 06:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.217 06:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.9X0 00:16:28.217 06:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.9X0 00:16:28.479 06:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.nwo ]] 00:16:28.479 06:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.nwo 00:16:28.479 06:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.479 06:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.479 06:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.479 06:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.nwo 00:16:28.479 06:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.nwo 00:16:28.741 06:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:28.741 06:59:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.1UP 00:16:28.741 06:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.741 06:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.741 06:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.741 06:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.1UP 00:16:28.741 06:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.1UP 00:16:28.741 06:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:16:28.741 06:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:28.741 06:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:28.741 06:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:28.741 06:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:28.741 06:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:29.003 06:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:16:29.003 06:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.003 06:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:29.004 06:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:29.004 06:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:29.004 06:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.004 06:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.004 06:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.004 06:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.004 06:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.004 06:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.004 06:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.004 
06:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.265 00:16:29.265 06:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:29.265 06:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.265 06:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:29.526 06:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.526 06:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.526 06:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.526 06:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.526 06:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.526 06:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:29.526 { 00:16:29.526 "cntlid": 1, 00:16:29.526 "qid": 0, 00:16:29.526 "state": "enabled", 00:16:29.526 "thread": "nvmf_tgt_poll_group_000", 00:16:29.526 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:29.526 "listen_address": { 00:16:29.526 "trtype": "TCP", 00:16:29.526 "adrfam": "IPv4", 00:16:29.526 "traddr": "10.0.0.2", 00:16:29.526 "trsvcid": "4420" 00:16:29.526 }, 00:16:29.526 "peer_address": { 00:16:29.526 "trtype": "TCP", 00:16:29.526 "adrfam": "IPv4", 00:16:29.526 "traddr": "10.0.0.1", 00:16:29.526 "trsvcid": "50814" 00:16:29.526 }, 00:16:29.526 "auth": { 00:16:29.526 "state": "completed", 00:16:29.526 "digest": "sha256", 00:16:29.526 "dhgroup": "null" 00:16:29.526 } 00:16:29.526 } 00:16:29.526 ]' 00:16:29.526 06:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:29.526 06:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:29.526 06:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:29.526 06:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:29.527 06:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:29.527 06:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.527 06:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.527 06:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.788 06:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:OWRlMjBlNGJkNDIwMzdjY2E4ODI2MWVhYTFiMmJlZjgyMTNkMDIyZjMwZWI3ZTAxBcGefQ==: --dhchap-ctrl-secret DHHC-1:03:MTU5OTE3M2RlMThkYTBmMzdkMGFlOTYzZTY3YmU3Mjk3ZGIxZmY2NzMzYTMzNjU5MTUzNDMyNjhhNjJkM2RhMR3Lg04=: 00:16:29.788 06:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:OWRlMjBlNGJkNDIwMzdjY2E4ODI2MWVhYTFiMmJlZjgyMTNkMDIyZjMwZWI3ZTAxBcGefQ==: --dhchap-ctrl-secret DHHC-1:03:MTU5OTE3M2RlMThkYTBmMzdkMGFlOTYzZTY3YmU3Mjk3ZGIxZmY2NzMzYTMzNjU5MTUzNDMyNjhhNjJkM2RhMR3Lg04=: 00:16:30.360 06:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.360 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.360 06:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:30.360 06:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.360 06:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.360 06:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.620 06:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:30.620 06:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:30.620 06:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:30.620 06:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:16:30.620 06:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:30.620 06:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:30.620 06:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:30.620 06:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:30.620 06:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.620 06:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.620 06:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.620 06:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.620 06:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.620 06:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.620 06:59:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.620 06:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.880 00:16:30.880 06:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:30.880 06:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:30.880 06:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.141 06:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.141 06:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.141 06:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.141 06:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.141 06:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.141 06:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.141 { 00:16:31.141 "cntlid": 3, 00:16:31.141 "qid": 0, 00:16:31.141 "state": "enabled", 00:16:31.141 "thread": "nvmf_tgt_poll_group_000", 00:16:31.141 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:31.141 "listen_address": { 00:16:31.141 "trtype": "TCP", 00:16:31.141 "adrfam": "IPv4", 00:16:31.141 "traddr": "10.0.0.2", 00:16:31.141 "trsvcid": "4420" 00:16:31.141 }, 00:16:31.141 "peer_address": { 00:16:31.141 "trtype": "TCP", 00:16:31.141 "adrfam": "IPv4", 00:16:31.141 "traddr": "10.0.0.1", 00:16:31.141 "trsvcid": "50832" 00:16:31.141 }, 00:16:31.141 "auth": { 00:16:31.141 "state": "completed", 00:16:31.141 "digest": "sha256", 00:16:31.141 "dhgroup": "null" 00:16:31.141 } 00:16:31.141 } 00:16:31.141 ]' 00:16:31.141 06:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:31.141 06:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:31.141 06:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:31.141 06:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:31.141 06:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:31.141 06:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.141 06:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.141 06:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
00:16:31.141 06:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:31.402 06:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDdhOTI5YWRkZjUzZDI5ZmNkODkxOGZiNjk1ZmY4NTL+NxR6: --dhchap-ctrl-secret DHHC-1:02:Njc3OGU4M2U0ZTk1NTNmM2RjZjZkNGNlYmI1YmQyNTliNmYzMTczN2ViYTAyMjRhNWKlNw==:
00:16:31.402 06:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDdhOTI5YWRkZjUzZDI5ZmNkODkxOGZiNjk1ZmY4NTL+NxR6: --dhchap-ctrl-secret DHHC-1:02:Njc3OGU4M2U0ZTk1NTNmM2RjZjZkNGNlYmI1YmQyNTliNmYzMTczN2ViYTAyMjRhNWKlNw==:
00:16:31.975 06:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:32.235 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:32.235 06:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:16:32.235 06:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:32.235 06:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:32.235 06:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:32.235 06:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:32.235 06:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:16:32.235 06:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:16:32.235 06:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2
00:16:32.235 06:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:32.236 06:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:16:32.236 06:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:16:32.236 06:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:16:32.236 06:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:32.236 06:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:32.236 06:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:32.236 06:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:32.236 06:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:32.236 06:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:32.236 06:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:32.236 06:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:32.496
00:16:32.496 06:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:32.496 06:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:32.496 06:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:32.756 06:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:32.756 06:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:32.756 06:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:32.756 06:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:32.756 06:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:32.756 06:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:32.756 {
00:16:32.756 "cntlid": 5,
00:16:32.756 "qid": 0,
00:16:32.756 "state": "enabled",
00:16:32.756 "thread": "nvmf_tgt_poll_group_000",
00:16:32.756 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:16:32.756 "listen_address": {
00:16:32.757 "trtype": "TCP",
00:16:32.757 "adrfam": "IPv4",
00:16:32.757 "traddr": "10.0.0.2",
00:16:32.757 "trsvcid": "4420"
00:16:32.757 },
00:16:32.757 "peer_address": {
00:16:32.757 "trtype": "TCP",
00:16:32.757 "adrfam": "IPv4",
00:16:32.757 "traddr": "10.0.0.1",
00:16:32.757 "trsvcid": "57306"
00:16:32.757 },
00:16:32.757 "auth": {
00:16:32.757 "state": "completed",
00:16:32.757 "digest": "sha256",
00:16:32.757 "dhgroup": "null"
00:16:32.757 }
00:16:32.757 }
00:16:32.757 ]'
00:16:32.757 06:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:32.757 06:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:32.757 06:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:32.757 06:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:16:32.757 06:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:33.018 06:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
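Each verification block above reduces the nvmf_subsystem_get_qpairs output to three fields and string-compares them against the loop variables. The same check written compactly (illustrative; auth.sh spells it out with separate jq calls and bash pattern tests, and the variable names continue the sketch above):

  q=$($rpc nvmf_subsystem_get_qpairs "$SUBNQN")
  [[ $(jq -r '.[0].auth.digest'  <<< "$q") == "$digest" ]]    # e.g. sha256
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$q") == "$dhgroup" ]]   # e.g. null, ffdhe2048
  [[ $(jq -r '.[0].auth.state'   <<< "$q") == completed ]]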
00:16:33.018 06:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:33.018 06:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:33.018 06:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjcwY2ZhODBjMThkMmYyOWIzNDcyMzdhNTA3MzdiOTk0MGNjNTg1YmM4NWY1OWYwXsw24w==: --dhchap-ctrl-secret DHHC-1:01:NGMyMWU1YWEyMWYxYmE4ZWQwMzE0MzNiMWZlOWYzNmGduDOR:
00:16:33.018 06:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YjcwY2ZhODBjMThkMmYyOWIzNDcyMzdhNTA3MzdiOTk0MGNjNTg1YmM4NWY1OWYwXsw24w==: --dhchap-ctrl-secret DHHC-1:01:NGMyMWU1YWEyMWYxYmE4ZWQwMzE0MzNiMWZlOWYzNmGduDOR:
00:16:33.956 06:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:33.956 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:33.956 06:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:16:33.956 06:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:33.956 06:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:33.956 06:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:33.956 06:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:33.956 06:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:16:33.956 06:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:16:33.956 06:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3
00:16:33.956 06:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:33.957 06:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:16:33.957 06:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:16:33.957 06:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:16:33.957 06:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:33.957 06:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
00:16:33.957 06:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:33.957 06:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:33.957 06:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:33.957 06:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:16:33.957 06:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:33.957 06:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:34.216
00:16:34.216 06:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:34.216 06:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:34.216 06:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:34.477 06:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:34.477 06:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:34.477 06:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:34.477 06:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:34.477 06:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:34.477 06:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:34.477 {
00:16:34.477 "cntlid": 7,
00:16:34.477 "qid": 0,
00:16:34.477 "state": "enabled",
00:16:34.477 "thread": "nvmf_tgt_poll_group_000",
00:16:34.477 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:16:34.477 "listen_address": {
00:16:34.477 "trtype": "TCP",
00:16:34.477 "adrfam": "IPv4",
00:16:34.477 "traddr": "10.0.0.2",
00:16:34.477 "trsvcid": "4420"
00:16:34.477 },
00:16:34.477 "peer_address": {
00:16:34.477 "trtype": "TCP",
00:16:34.477 "adrfam": "IPv4",
00:16:34.477 "traddr": "10.0.0.1",
00:16:34.477 "trsvcid": "57350"
00:16:34.477 },
00:16:34.477 "auth": {
00:16:34.477 "state": "completed",
00:16:34.477 "digest": "sha256",
00:16:34.477 "dhgroup": "null"
00:16:34.477 }
00:16:34.477 }
00:16:34.477 ]'
00:16:34.477 06:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:34.477 06:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:34.477 06:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:34.477 06:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:16:34.477 06:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
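Note that for key3 the host was registered with --dhchap-key key3 only: the ${ckeys[$3]:+...} expansion contributes nothing when no controller key is configured for that slot, so this iteration exercises unidirectional authentication (the target verifies the host, but the host does not challenge the controller). The conditional-argument idiom in isolation, with an illustrative keyid variable in place of the script's positional $3:

  # Expands to two extra words only when ckeys[$keyid] is set and non-empty.
  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key "key$keyid" "${ckey[@]}"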
00:16:34.477 06:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:34.477 06:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:34.477 06:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:34.738 06:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2QxY2M5MThiNmI4ZDc0N2FhZDJmZjc1NTg2YmE2ZjdmMmE3NjY5NTJlMjc4YWYyYWY4Y2U2OGI4NDU0NWViYyKK2VQ=:
00:16:34.738 06:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:N2QxY2M5MThiNmI4ZDc0N2FhZDJmZjc1NTg2YmE2ZjdmMmE3NjY5NTJlMjc4YWYyYWY4Y2U2OGI4NDU0NWViYyKK2VQ=:
00:16:35.308 06:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:35.308 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:35.308 06:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:16:35.308 06:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:35.308 06:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:35.308 06:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:35.308 06:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:16:35.308 06:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:35.308 06:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:16:35.308 06:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:16:35.569 06:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0
00:16:35.569 06:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:35.569 06:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:16:35.569 06:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:16:35.569 06:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:16:35.569 06:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:35.569 06:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0
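With the key3 pass finished, the outer loop has advanced from the null DH group to ffdhe2048 and the four-key cycle restarts. The overall shape of the sweep is roughly the following sketch; the digest and dhgroup lists come from arrays defined earlier in auth.sh, and only the sha256 slice is visible in this excerpt:

  for dhgroup in "${dhgroups[@]}"; do        # null ffdhe2048 ffdhe3072 ...
      for keyid in "${!keys[@]}"; do         # 0 1 2 3
          hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
          connect_authenticate sha256 "$dhgroup" "$keyid"
      done
  done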
00:16:35.569 06:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:35.569 06:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:35.569 06:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:35.569 06:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:35.569 06:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:35.569 06:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:35.830
00:16:35.830 06:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:35.830 06:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:35.830 06:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:35.830 06:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:35.830 06:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:35.830 06:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:35.830 06:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:36.090 06:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:36.090 06:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:36.090 {
00:16:36.090 "cntlid": 9,
00:16:36.090 "qid": 0,
00:16:36.090 "state": "enabled",
00:16:36.090 "thread": "nvmf_tgt_poll_group_000",
00:16:36.090 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:16:36.090 "listen_address": {
00:16:36.090 "trtype": "TCP",
00:16:36.090 "adrfam": "IPv4",
00:16:36.090 "traddr": "10.0.0.2",
00:16:36.090 "trsvcid": "4420"
00:16:36.090 },
00:16:36.090 "peer_address": {
00:16:36.090 "trtype": "TCP",
00:16:36.090 "adrfam": "IPv4",
00:16:36.090 "traddr": "10.0.0.1",
00:16:36.090 "trsvcid": "57362"
00:16:36.090 },
00:16:36.090 "auth": {
00:16:36.090 "state": "completed",
00:16:36.090 "digest": "sha256",
00:16:36.090 "dhgroup": "ffdhe2048"
00:16:36.090 }
00:16:36.090 }
00:16:36.090 ]'
00:16:36.090 06:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:36.090 06:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:36.090 06:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:36.090 06:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:36.090 06:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:36.090 06:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:36.090 06:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:36.090 06:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:36.351 06:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWRlMjBlNGJkNDIwMzdjY2E4ODI2MWVhYTFiMmJlZjgyMTNkMDIyZjMwZWI3ZTAxBcGefQ==: --dhchap-ctrl-secret DHHC-1:03:MTU5OTE3M2RlMThkYTBmMzdkMGFlOTYzZTY3YmU3Mjk3ZGIxZmY2NzMzYTMzNjU5MTUzNDMyNjhhNjJkM2RhMR3Lg04=:
00:16:36.351 06:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:OWRlMjBlNGJkNDIwMzdjY2E4ODI2MWVhYTFiMmJlZjgyMTNkMDIyZjMwZWI3ZTAxBcGefQ==: --dhchap-ctrl-secret DHHC-1:03:MTU5OTE3M2RlMThkYTBmMzdkMGFlOTYzZTY3YmU3Mjk3ZGIxZmY2NzMzYTMzNjU5MTUzNDMyNjhhNjJkM2RhMR3Lg04=:
00:16:36.922 06:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:36.922 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:36.922 06:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:16:36.922 06:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:36.922 06:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:36.922 06:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:36.922 06:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:36.922 06:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:16:36.922 06:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:16:37.183 06:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1
00:16:37.183 06:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:37.183 06:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:16:37.183 06:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:16:37.183 06:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:16:37.183 06:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:37.183 06:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:37.183 06:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:37.183 06:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:37.183 06:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:37.183 06:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:37.183 06:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:37.183 06:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:37.445
00:16:37.445 06:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:37.445 06:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:37.445 06:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:37.445 06:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:37.445 06:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:37.445 06:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:37.445 06:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:37.445 06:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:37.445 06:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:37.445 {
00:16:37.445 "cntlid": 11,
00:16:37.445 "qid": 0,
00:16:37.445 "state": "enabled",
00:16:37.445 "thread": "nvmf_tgt_poll_group_000",
00:16:37.445 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:16:37.445 "listen_address": {
00:16:37.445 "trtype": "TCP",
00:16:37.445 "adrfam": "IPv4",
00:16:37.445 "traddr": "10.0.0.2",
00:16:37.445 "trsvcid": "4420"
00:16:37.445 },
00:16:37.445 "peer_address": {
00:16:37.445 "trtype": "TCP",
00:16:37.445 "adrfam": "IPv4",
00:16:37.445 "traddr": "10.0.0.1",
00:16:37.445 "trsvcid": "57394"
00:16:37.445 },
00:16:37.445 "auth": {
00:16:37.445 "state": "completed",
00:16:37.445 "digest": "sha256",
00:16:37.445 "dhgroup": "ffdhe2048"
00:16:37.445 }
00:16:37.445 }
00:16:37.445 ]'
00:16:37.445 06:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:37.706 06:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:37.706 06:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:37.706 06:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:37.706 06:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:37.706 06:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:37.706 06:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:37.966 06:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:37.966 06:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDdhOTI5YWRkZjUzZDI5ZmNkODkxOGZiNjk1ZmY4NTL+NxR6: --dhchap-ctrl-secret DHHC-1:02:Njc3OGU4M2U0ZTk1NTNmM2RjZjZkNGNlYmI1YmQyNTliNmYzMTczN2ViYTAyMjRhNWKlNw==:
00:16:37.966 06:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDdhOTI5YWRkZjUzZDI5ZmNkODkxOGZiNjk1ZmY4NTL+NxR6: --dhchap-ctrl-secret DHHC-1:02:Njc3OGU4M2U0ZTk1NTNmM2RjZjZkNGNlYmI1YmQyNTliNmYzMTczN2ViYTAyMjRhNWKlNw==:
00:16:38.536 06:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:38.536 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:38.536 06:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:16:38.536 06:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:38.536 06:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:38.536 06:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:38.536 06:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:38.536 06:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:16:38.536 06:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:16:38.797 06:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2
00:16:38.797 06:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:38.797 06:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:16:38.797 06:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
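The secrets exchanged in these records use the NVMe-defined representation DHHC-1:<t>:<base64 key>:, where <t> selects the transformation applied to the configured key (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512). Keys in this shape can typically be generated with nvme-cli, assuming a build that ships the gen-dhchap-key subcommand; the flags below are illustrative rather than taken from this log:

  nvme gen-dhchap-key --hmac=1 --nqn nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be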
00:16:38.797 06:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:16:38.797 06:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:38.797 06:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:38.797 06:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:38.797 06:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:38.797 06:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:38.797 06:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:38.797 06:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:38.797 06:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:39.058
00:16:39.058 06:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:39.058 06:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:39.058 06:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:39.058 06:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:39.058 06:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:39.058 06:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:39.058 06:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:39.058 06:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:39.058 06:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:39.058 {
00:16:39.058 "cntlid": 13,
00:16:39.058 "qid": 0,
00:16:39.058 "state": "enabled",
00:16:39.058 "thread": "nvmf_tgt_poll_group_000",
00:16:39.058 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:16:39.058 "listen_address": {
00:16:39.058 "trtype": "TCP",
00:16:39.058 "adrfam": "IPv4",
00:16:39.058 "traddr": "10.0.0.2",
00:16:39.058 "trsvcid": "4420"
00:16:39.058 },
00:16:39.058 "peer_address": {
00:16:39.058 "trtype": "TCP",
00:16:39.058 "adrfam": "IPv4",
00:16:39.058 "traddr": "10.0.0.1",
00:16:39.058 "trsvcid": "57418"
00:16:39.058 },
00:16:39.058 "auth": {
00:16:39.058 "state": "completed",
00:16:39.058 "digest": "sha256",
00:16:39.058 "dhgroup": "ffdhe2048"
00:16:39.058 }
00:16:39.058 }
00:16:39.059 ]'
00:16:39.319 06:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:39.319 06:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:39.319 06:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:39.319 06:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:39.319 06:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:39.319 06:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:39.319 06:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:39.580 06:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:39.580 06:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjcwY2ZhODBjMThkMmYyOWIzNDcyMzdhNTA3MzdiOTk0MGNjNTg1YmM4NWY1OWYwXsw24w==: --dhchap-ctrl-secret DHHC-1:01:NGMyMWU1YWEyMWYxYmE4ZWQwMzE0MzNiMWZlOWYzNmGduDOR:
00:16:39.580 06:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YjcwY2ZhODBjMThkMmYyOWIzNDcyMzdhNTA3MzdiOTk0MGNjNTg1YmM4NWY1OWYwXsw24w==: --dhchap-ctrl-secret DHHC-1:01:NGMyMWU1YWEyMWYxYmE4ZWQwMzE0MzNiMWZlOWYzNmGduDOR:
00:16:40.152 06:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:40.152 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:40.152 06:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:16:40.152 06:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:40.152 06:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:40.152 06:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:40.152 06:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:40.152 06:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:16:40.152 06:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:16:40.413 06:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3
00:16:40.413 06:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:40.413 06:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:16:40.413 06:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:16:40.413 06:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:16:40.413 06:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:40.413 06:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
00:16:40.413 06:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:40.413 06:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:40.413 06:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:40.413 06:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:16:40.413 06:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:40.413 06:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:40.674
00:16:40.674 06:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:40.674 06:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:40.674 06:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:40.674 06:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:40.674 06:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:40.674 06:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:40.674 06:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:40.674 06:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:40.674 06:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:40.674 {
00:16:40.674 "cntlid": 15,
00:16:40.674 "qid": 0,
00:16:40.674 "state": "enabled",
00:16:40.674 "thread": "nvmf_tgt_poll_group_000",
00:16:40.674 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:16:40.674 "listen_address": {
00:16:40.674 "trtype": "TCP",
00:16:40.674 "adrfam": "IPv4",
00:16:40.674 "traddr": "10.0.0.2",
00:16:40.674 "trsvcid": "4420"
00:16:40.674 },
00:16:40.674 "peer_address": {
00:16:40.674 "trtype": "TCP",
00:16:40.674 "adrfam": "IPv4",
00:16:40.674 "traddr": "10.0.0.1",
00:16:40.674 "trsvcid": "57458"
00:16:40.674 },
00:16:40.674 "auth": {
00:16:40.674 "state": "completed",
00:16:40.674 "digest": "sha256",
00:16:40.674 "dhgroup": "ffdhe2048"
00:16:40.674 }
00:16:40.674 }
00:16:40.674 ]'
00:16:40.934 06:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:40.934 06:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:40.934 06:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:40.934 06:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:40.934 06:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:40.934 06:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:40.934 06:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:41.195 06:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:41.195 06:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2QxY2M5MThiNmI4ZDc0N2FhZDJmZjc1NTg2YmE2ZjdmMmE3NjY5NTJlMjc4YWYyYWY4Y2U2OGI4NDU0NWViYyKK2VQ=:
00:16:41.195 06:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:N2QxY2M5MThiNmI4ZDc0N2FhZDJmZjc1NTg2YmE2ZjdmMmE3NjY5NTJlMjc4YWYyYWY4Y2U2OGI4NDU0NWViYyKK2VQ=:
00:16:41.766 06:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:41.766 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:41.766 06:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:16:41.766 06:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:41.766 06:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:41.766 06:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:41.766 06:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:16:41.766 06:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:41.766 06:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:16:41.766 06:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:16:42.027 06:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0
00:16:42.027 06:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:42.027 06:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:16:42.027 06:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:16:42.027 06:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:16:42.027 06:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:42.027 06:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:42.027 06:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:42.027 06:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:42.027 06:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:42.027 06:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:42.027 06:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:42.027 06:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:42.288
00:16:42.288 06:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:42.288 06:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:42.288 06:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:42.288 06:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:42.288 06:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:42.288 06:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:42.288 06:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:42.288 06:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:42.548 06:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:42.548 {
00:16:42.548 "cntlid": 17,
00:16:42.548 "qid": 0,
00:16:42.548 "state": "enabled",
00:16:42.548 "thread": "nvmf_tgt_poll_group_000",
00:16:42.548 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:16:42.548 "listen_address": {
00:16:42.548 "trtype": "TCP",
00:16:42.548 "adrfam": "IPv4",
00:16:42.548 "traddr": "10.0.0.2",
00:16:42.548 "trsvcid": "4420"
00:16:42.548 },
00:16:42.548 "peer_address": {
00:16:42.548 "trtype": "TCP",
00:16:42.548 "adrfam": "IPv4",
00:16:42.548 "traddr": "10.0.0.1",
00:16:42.548 "trsvcid": "51354"
00:16:42.548 },
00:16:42.548 "auth": {
00:16:42.548 "state": "completed",
00:16:42.548 "digest": "sha256",
00:16:42.548 "dhgroup": "ffdhe3072"
00:16:42.548 }
00:16:42.548 }
00:16:42.548 ]'
00:16:42.548 06:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:42.548 06:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:42.548 06:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:42.548 06:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:42.548 06:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:42.548 06:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:42.548 06:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:42.808 06:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:42.808 06:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWRlMjBlNGJkNDIwMzdjY2E4ODI2MWVhYTFiMmJlZjgyMTNkMDIyZjMwZWI3ZTAxBcGefQ==: --dhchap-ctrl-secret DHHC-1:03:MTU5OTE3M2RlMThkYTBmMzdkMGFlOTYzZTY3YmU3Mjk3ZGIxZmY2NzMzYTMzNjU5MTUzNDMyNjhhNjJkM2RhMR3Lg04=:
00:16:42.808 06:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:OWRlMjBlNGJkNDIwMzdjY2E4ODI2MWVhYTFiMmJlZjgyMTNkMDIyZjMwZWI3ZTAxBcGefQ==: --dhchap-ctrl-secret DHHC-1:03:MTU5OTE3M2RlMThkYTBmMzdkMGFlOTYzZTY3YmU3Mjk3ZGIxZmY2NzMzYTMzNjU5MTUzNDMyNjhhNjJkM2RhMR3Lg04=:
00:16:43.379 06:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:43.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:43.379 06:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:16:43.379 06:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:43.379 06:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:43.379 06:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:43.379 06:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:43.379 06:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
06:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:16:43.640 06:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1
00:16:43.640 06:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:43.640 06:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:16:43.640 06:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:16:43.640 06:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:16:43.640 06:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:43.640 06:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:43.640 06:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:43.640 06:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:43.640 06:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:43.640 06:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:43.640 06:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:43.640 06:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:43.900
00:16:43.900 06:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:43.900 06:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:43.900 06:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:43.900 06:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:43.900 06:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:43.900 06:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:43.900 06:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:44.161 06:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:44.161 06:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:44.161 {
00:16:44.161 "cntlid": 19,
00:16:44.161 "qid": 0,
00:16:44.161 "state": "enabled",
00:16:44.161 "thread": "nvmf_tgt_poll_group_000",
00:16:44.161 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:16:44.161 "listen_address": {
00:16:44.161 "trtype": "TCP",
00:16:44.161 "adrfam": "IPv4",
00:16:44.161 "traddr": "10.0.0.2",
00:16:44.161 "trsvcid": "4420"
00:16:44.161 },
00:16:44.161 "peer_address": {
00:16:44.161 "trtype": "TCP",
00:16:44.161 "adrfam": "IPv4",
00:16:44.161 "traddr": "10.0.0.1",
00:16:44.161 "trsvcid": "51368"
00:16:44.161 },
00:16:44.161 "auth": {
00:16:44.161 "state": "completed",
00:16:44.161 "digest": "sha256",
00:16:44.161 "dhgroup": "ffdhe3072"
00:16:44.161 }
00:16:44.161 }
00:16:44.161 ]'
00:16:44.161 06:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:44.161 06:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:44.161 06:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:44.161 06:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:44.161 06:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:44.161 06:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:44.161 06:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:44.422 06:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:44.423 06:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDdhOTI5YWRkZjUzZDI5ZmNkODkxOGZiNjk1ZmY4NTL+NxR6: --dhchap-ctrl-secret DHHC-1:02:Njc3OGU4M2U0ZTk1NTNmM2RjZjZkNGNlYmI1YmQyNTliNmYzMTczN2ViYTAyMjRhNWKlNw==:
00:16:44.423 06:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDdhOTI5YWRkZjUzZDI5ZmNkODkxOGZiNjk1ZmY4NTL+NxR6: --dhchap-ctrl-secret DHHC-1:02:Njc3OGU4M2U0ZTk1NTNmM2RjZjZkNGNlYmI1YmQyNTliNmYzMTczN2ViYTAyMjRhNWKlNw==:
00:16:44.994 06:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:44.994 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:44.994 06:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:16:44.994 06:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:44.994 06:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:44.994 06:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:44.994 06:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:44.994 06:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:16:44.994 06:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:16:45.254 06:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2
00:16:45.254 06:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:45.254 06:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:16:45.254 06:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:16:45.255 06:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:16:45.255 06:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:45.255 06:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:45.255 06:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:45.255 06:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:45.255 06:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:45.255 06:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:45.255 06:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:45.255 06:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:45.515
00:16:45.515 06:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:45.515 06:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:45.515 06:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:45.776 06:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:45.776 06:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:45.776 06:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:45.776 06:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:45.776 06:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:45.776 06:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:45.776 {
00:16:45.776 "cntlid": 21,
00:16:45.776 "qid": 0,
00:16:45.776 "state": "enabled",
00:16:45.777 "thread": "nvmf_tgt_poll_group_000",
00:16:45.777 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:16:45.777 "listen_address": {
00:16:45.777 "trtype": "TCP",
00:16:45.777 "adrfam": "IPv4",
00:16:45.777 "traddr": "10.0.0.2",
00:16:45.777 "trsvcid": "4420"
00:16:45.777 },
00:16:45.777 "peer_address": {
00:16:45.777 "trtype": "TCP",
00:16:45.777 "adrfam": "IPv4",
00:16:45.777 "traddr": "10.0.0.1",
00:16:45.777 "trsvcid": "51396"
00:16:45.777 },
00:16:45.777 "auth": {
00:16:45.777 "state": "completed",
00:16:45.777 "digest": "sha256",
00:16:45.777 "dhgroup": "ffdhe3072"
00:16:45.777 }
00:16:45.777 }
00:16:45.777 ]'
00:16:45.777 06:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:45.777 06:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:45.777 06:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:45.777 06:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:45.777 06:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:45.777 06:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:45.777 06:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:46.038 06:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:46.038 06:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjcwY2ZhODBjMThkMmYyOWIzNDcyMzdhNTA3MzdiOTk0MGNjNTg1YmM4NWY1OWYwXsw24w==: --dhchap-ctrl-secret DHHC-1:01:NGMyMWU1YWEyMWYxYmE4ZWQwMzE0MzNiMWZlOWYzNmGduDOR:
00:16:46.038 06:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YjcwY2ZhODBjMThkMmYyOWIzNDcyMzdhNTA3MzdiOTk0MGNjNTg1YmM4NWY1OWYwXsw24w==: --dhchap-ctrl-secret DHHC-1:01:NGMyMWU1YWEyMWYxYmE4ZWQwMzE0MzNiMWZlOWYzNmGduDOR:
00:16:46.608 06:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:46.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:46.608 06:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:16:46.608 06:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:46.608 06:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:46.608 06:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589
-- # [[ 0 == 0 ]] 00:16:46.608 06:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:46.608 06:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:46.608 06:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:46.870 06:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:16:46.870 06:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.870 06:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:46.870 06:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:46.870 06:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:46.870 06:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.870 06:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:46.870 06:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.870 06:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.870 06:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.870 06:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:46.870 06:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:46.870 06:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:47.130 00:16:47.130 06:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.130 06:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.130 06:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.392 06:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.392 06:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.392 06:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.392 06:59:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.392 06:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.392 06:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:47.392 { 00:16:47.392 "cntlid": 23, 00:16:47.392 "qid": 0, 00:16:47.392 "state": "enabled", 00:16:47.392 "thread": "nvmf_tgt_poll_group_000", 00:16:47.392 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:47.392 "listen_address": { 00:16:47.392 "trtype": "TCP", 00:16:47.392 "adrfam": "IPv4", 00:16:47.392 "traddr": "10.0.0.2", 00:16:47.392 "trsvcid": "4420" 00:16:47.392 }, 00:16:47.392 "peer_address": { 00:16:47.392 "trtype": "TCP", 00:16:47.392 "adrfam": "IPv4", 00:16:47.392 "traddr": "10.0.0.1", 00:16:47.392 "trsvcid": "51418" 00:16:47.392 }, 00:16:47.392 "auth": { 00:16:47.392 "state": "completed", 00:16:47.392 "digest": "sha256", 00:16:47.392 "dhgroup": "ffdhe3072" 00:16:47.392 } 00:16:47.392 } 00:16:47.392 ]' 00:16:47.392 06:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.392 06:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:47.392 06:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.392 06:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:47.392 06:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.392 06:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.392 06:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.392 06:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.652 06:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2QxY2M5MThiNmI4ZDc0N2FhZDJmZjc1NTg2YmE2ZjdmMmE3NjY5NTJlMjc4YWYyYWY4Y2U2OGI4NDU0NWViYyKK2VQ=: 00:16:47.652 06:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:N2QxY2M5MThiNmI4ZDc0N2FhZDJmZjc1NTg2YmE2ZjdmMmE3NjY5NTJlMjc4YWYyYWY4Y2U2OGI4NDU0NWViYyKK2VQ=: 00:16:48.223 06:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.223 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.223 06:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:48.223 06:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.223 06:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.223 06:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
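The --dhchap-secret/--dhchap-ctrl-secret strings interleaved through these commands are DHHC-1 secret representations. For reference (this comes from the NVMe in-band authentication spec and nvme-cli conventions, not from anything in this log): in DHHC-1:NN:<base64>:, the NN field names the hash used to transform the secret (00 = unhashed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512), and the base64 payload carries the key material followed by a CRC-32 of the key. Keys of this shape are usually produced with nvme-cli; the flags below are an assumption about a reasonably recent nvme-cli build, not a command taken from this run.
# generate a 32-byte, SHA-256-transformed DHChap key (illustrative only;
# not part of this test run)
nvme gen-dhchap-key --key-length=32 --hmac=1 \
    --nqn nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be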
== 0 ]] 00:16:48.223 06:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:48.223 06:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:48.223 06:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:48.223 06:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:48.484 06:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:16:48.484 06:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:48.484 06:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:48.484 06:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:48.484 06:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:48.484 06:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.484 06:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.484 06:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.484 06:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.484 06:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.484 06:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.484 06:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.484 06:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.746 00:16:48.746 06:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:48.746 06:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:48.746 06:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.007 06:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.007 06:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.007 06:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.007 06:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.007 06:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.007 06:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.007 { 00:16:49.007 "cntlid": 25, 00:16:49.007 "qid": 0, 00:16:49.007 "state": "enabled", 00:16:49.007 "thread": "nvmf_tgt_poll_group_000", 00:16:49.007 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:49.007 "listen_address": { 00:16:49.007 "trtype": "TCP", 00:16:49.007 "adrfam": "IPv4", 00:16:49.007 "traddr": "10.0.0.2", 00:16:49.007 "trsvcid": "4420" 00:16:49.007 }, 00:16:49.007 "peer_address": { 00:16:49.007 "trtype": "TCP", 00:16:49.007 "adrfam": "IPv4", 00:16:49.007 "traddr": "10.0.0.1", 00:16:49.007 "trsvcid": "51442" 00:16:49.007 }, 00:16:49.007 "auth": { 00:16:49.007 "state": "completed", 00:16:49.007 "digest": "sha256", 00:16:49.007 "dhgroup": "ffdhe4096" 00:16:49.007 } 00:16:49.007 } 00:16:49.007 ]' 00:16:49.007 06:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.007 06:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:49.007 06:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.007 06:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:49.007 06:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.007 06:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.007 06:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.007 06:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.268 06:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWRlMjBlNGJkNDIwMzdjY2E4ODI2MWVhYTFiMmJlZjgyMTNkMDIyZjMwZWI3ZTAxBcGefQ==: --dhchap-ctrl-secret DHHC-1:03:MTU5OTE3M2RlMThkYTBmMzdkMGFlOTYzZTY3YmU3Mjk3ZGIxZmY2NzMzYTMzNjU5MTUzNDMyNjhhNjJkM2RhMR3Lg04=: 00:16:49.268 06:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:OWRlMjBlNGJkNDIwMzdjY2E4ODI2MWVhYTFiMmJlZjgyMTNkMDIyZjMwZWI3ZTAxBcGefQ==: --dhchap-ctrl-secret DHHC-1:03:MTU5OTE3M2RlMThkYTBmMzdkMGFlOTYzZTY3YmU3Mjk3ZGIxZmY2NzMzYTMzNjU5MTUzNDMyNjhhNjJkM2RhMR3Lg04=: 00:16:49.841 06:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.841 06:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:49.841 06:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.841 06:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.841 06:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.841 06:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.841 06:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:49.841 06:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:50.102 06:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:16:50.102 06:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.102 06:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:50.102 06:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:50.102 06:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:50.102 06:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.102 06:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.102 06:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.102 06:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.102 06:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.102 06:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.102 06:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.102 06:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.364 00:16:50.364 06:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:50.364 06:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:50.364 06:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.625 06:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.625 06:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.625 06:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.625 06:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.625 06:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.625 06:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:50.625 { 00:16:50.625 "cntlid": 27, 00:16:50.625 "qid": 0, 00:16:50.625 "state": "enabled", 00:16:50.625 "thread": "nvmf_tgt_poll_group_000", 00:16:50.625 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:50.625 "listen_address": { 00:16:50.625 "trtype": "TCP", 00:16:50.625 "adrfam": "IPv4", 00:16:50.625 "traddr": "10.0.0.2", 00:16:50.625 "trsvcid": "4420" 00:16:50.625 }, 00:16:50.625 "peer_address": { 00:16:50.625 "trtype": "TCP", 00:16:50.625 "adrfam": "IPv4", 00:16:50.625 "traddr": "10.0.0.1", 00:16:50.625 "trsvcid": "51478" 00:16:50.625 }, 00:16:50.625 "auth": { 00:16:50.625 "state": "completed", 00:16:50.625 "digest": "sha256", 00:16:50.625 "dhgroup": "ffdhe4096" 00:16:50.625 } 00:16:50.625 } 00:16:50.625 ]' 00:16:50.625 06:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:50.625 06:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:50.625 06:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:50.625 06:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:50.625 06:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:50.886 06:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.886 06:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.886 06:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.886 06:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDdhOTI5YWRkZjUzZDI5ZmNkODkxOGZiNjk1ZmY4NTL+NxR6: --dhchap-ctrl-secret DHHC-1:02:Njc3OGU4M2U0ZTk1NTNmM2RjZjZkNGNlYmI1YmQyNTliNmYzMTczN2ViYTAyMjRhNWKlNw==: 00:16:50.886 06:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDdhOTI5YWRkZjUzZDI5ZmNkODkxOGZiNjk1ZmY4NTL+NxR6: --dhchap-ctrl-secret DHHC-1:02:Njc3OGU4M2U0ZTk1NTNmM2RjZjZkNGNlYmI1YmQyNTliNmYzMTczN2ViYTAyMjRhNWKlNw==: 00:16:51.833 06:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:16:51.833 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.833 06:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:51.833 06:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.833 06:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.833 06:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.833 06:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:51.833 06:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:51.833 06:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:51.833 06:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:16:51.833 06:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:51.833 06:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:51.833 06:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:51.833 06:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:51.833 06:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.833 06:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.833 06:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.833 06:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.833 06:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.833 06:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.833 06:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.834 06:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.094 00:16:52.094 06:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
00:16:52.094 06:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.094 06:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.355 06:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.355 06:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.355 06:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.355 06:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.355 06:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.355 06:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.355 { 00:16:52.355 "cntlid": 29, 00:16:52.355 "qid": 0, 00:16:52.355 "state": "enabled", 00:16:52.355 "thread": "nvmf_tgt_poll_group_000", 00:16:52.355 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:52.355 "listen_address": { 00:16:52.355 "trtype": "TCP", 00:16:52.355 "adrfam": "IPv4", 00:16:52.355 "traddr": "10.0.0.2", 00:16:52.355 "trsvcid": "4420" 00:16:52.355 }, 00:16:52.355 "peer_address": { 00:16:52.355 "trtype": "TCP", 00:16:52.355 "adrfam": "IPv4", 00:16:52.355 "traddr": "10.0.0.1", 00:16:52.355 "trsvcid": "46634" 00:16:52.355 }, 00:16:52.355 "auth": { 00:16:52.355 "state": "completed", 00:16:52.355 "digest": "sha256", 00:16:52.355 "dhgroup": "ffdhe4096" 00:16:52.355 } 00:16:52.355 } 00:16:52.355 ]' 00:16:52.355 06:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:52.355 06:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:52.355 06:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.355 06:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:52.355 06:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.355 06:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.355 06:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.355 06:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.617 06:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjcwY2ZhODBjMThkMmYyOWIzNDcyMzdhNTA3MzdiOTk0MGNjNTg1YmM4NWY1OWYwXsw24w==: --dhchap-ctrl-secret DHHC-1:01:NGMyMWU1YWEyMWYxYmE4ZWQwMzE0MzNiMWZlOWYzNmGduDOR: 00:16:52.617 06:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YjcwY2ZhODBjMThkMmYyOWIzNDcyMzdhNTA3MzdiOTk0MGNjNTg1YmM4NWY1OWYwXsw24w==: 
--dhchap-ctrl-secret DHHC-1:01:NGMyMWU1YWEyMWYxYmE4ZWQwMzE0MzNiMWZlOWYzNmGduDOR: 00:16:53.189 06:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.189 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.189 06:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:53.189 06:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.189 06:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.189 06:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.189 06:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:53.189 06:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:53.189 06:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:53.451 06:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:16:53.451 06:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:53.451 06:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:53.451 06:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:53.451 06:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:53.451 06:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.451 06:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:53.451 06:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.451 06:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.451 06:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.451 06:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:53.451 06:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:53.451 06:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:53.712 00:16:53.712 06:59:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:53.712 06:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:53.712 06:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.973 06:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.973 06:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.973 06:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.973 06:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.973 06:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.973 06:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:53.973 { 00:16:53.973 "cntlid": 31, 00:16:53.973 "qid": 0, 00:16:53.973 "state": "enabled", 00:16:53.973 "thread": "nvmf_tgt_poll_group_000", 00:16:53.973 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:53.973 "listen_address": { 00:16:53.973 "trtype": "TCP", 00:16:53.973 "adrfam": "IPv4", 00:16:53.973 "traddr": "10.0.0.2", 00:16:53.973 "trsvcid": "4420" 00:16:53.973 }, 00:16:53.973 "peer_address": { 00:16:53.973 "trtype": "TCP", 00:16:53.973 "adrfam": "IPv4", 00:16:53.973 "traddr": "10.0.0.1", 00:16:53.973 "trsvcid": "46660" 00:16:53.973 }, 00:16:53.973 "auth": { 00:16:53.973 "state": "completed", 00:16:53.973 "digest": "sha256", 00:16:53.973 "dhgroup": "ffdhe4096" 00:16:53.973 } 00:16:53.973 } 00:16:53.973 ]' 00:16:53.973 06:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:53.973 06:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:53.973 06:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:53.973 06:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:53.973 06:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.234 06:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.234 06:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.234 06:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.234 06:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2QxY2M5MThiNmI4ZDc0N2FhZDJmZjc1NTg2YmE2ZjdmMmE3NjY5NTJlMjc4YWYyYWY4Y2U2OGI4NDU0NWViYyKK2VQ=: 00:16:54.234 06:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret 
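The blocks in this stretch differ only in the DH group (ffdhe3072, then ffdhe4096, with ffdhe6144 starting just below) and the key index 0-3. The auth.sh@119/@120 xtrace lines show the loops driving them; their rough shape, reconstructed from those traces (hostrpc and connect_authenticate are the script's own names, and the digest is fixed at sha256 throughout this portion of the log):
# shape of the driving loops, as printed by the auth.sh@119/@120 xtrace lines
for dhgroup in "${dhgroups[@]}"; do           # ffdhe3072, ffdhe4096, ffdhe6144, ...
    for keyid in "${!keys[@]}"; do            # key0 .. key3
        hostrpc bdev_nvme_set_options --dhchap-digests sha256 \
            --dhchap-dhgroups "$dhgroup"      # auth.sh@121
        connect_authenticate sha256 "$dhgroup" "$keyid"   # auth.sh@123
    done
done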
DHHC-1:03:N2QxY2M5MThiNmI4ZDc0N2FhZDJmZjc1NTg2YmE2ZjdmMmE3NjY5NTJlMjc4YWYyYWY4Y2U2OGI4NDU0NWViYyKK2VQ=: 00:16:54.842 06:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.842 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.842 06:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:54.842 06:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.842 06:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.842 06:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.842 06:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:54.842 06:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:54.842 06:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:54.842 06:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:55.108 06:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:16:55.108 06:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:55.108 06:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:55.108 06:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:55.108 06:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:55.108 06:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.108 06:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.108 06:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.108 06:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.108 06:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.108 06:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.108 06:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.108 06:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.369 00:16:55.369 06:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:55.369 06:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:55.369 06:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.629 06:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.629 06:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.629 06:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.629 06:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.629 06:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.629 06:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:55.629 { 00:16:55.629 "cntlid": 33, 00:16:55.629 "qid": 0, 00:16:55.629 "state": "enabled", 00:16:55.629 "thread": "nvmf_tgt_poll_group_000", 00:16:55.629 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:55.629 "listen_address": { 00:16:55.629 "trtype": "TCP", 00:16:55.629 "adrfam": "IPv4", 00:16:55.629 "traddr": "10.0.0.2", 00:16:55.629 "trsvcid": "4420" 00:16:55.629 }, 00:16:55.629 "peer_address": { 00:16:55.629 "trtype": "TCP", 00:16:55.629 "adrfam": "IPv4", 00:16:55.629 "traddr": "10.0.0.1", 00:16:55.629 "trsvcid": "46672" 00:16:55.629 }, 00:16:55.629 "auth": { 00:16:55.629 "state": "completed", 00:16:55.629 "digest": "sha256", 00:16:55.629 "dhgroup": "ffdhe6144" 00:16:55.629 } 00:16:55.629 } 00:16:55.629 ]' 00:16:55.629 06:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:55.629 06:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:55.629 06:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:55.629 06:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:55.629 06:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:55.890 06:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.890 06:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.890 06:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.890 06:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWRlMjBlNGJkNDIwMzdjY2E4ODI2MWVhYTFiMmJlZjgyMTNkMDIyZjMwZWI3ZTAxBcGefQ==: --dhchap-ctrl-secret 
DHHC-1:03:MTU5OTE3M2RlMThkYTBmMzdkMGFlOTYzZTY3YmU3Mjk3ZGIxZmY2NzMzYTMzNjU5MTUzNDMyNjhhNjJkM2RhMR3Lg04=: 00:16:55.890 06:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:OWRlMjBlNGJkNDIwMzdjY2E4ODI2MWVhYTFiMmJlZjgyMTNkMDIyZjMwZWI3ZTAxBcGefQ==: --dhchap-ctrl-secret DHHC-1:03:MTU5OTE3M2RlMThkYTBmMzdkMGFlOTYzZTY3YmU3Mjk3ZGIxZmY2NzMzYTMzNjU5MTUzNDMyNjhhNjJkM2RhMR3Lg04=: 00:16:56.831 06:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.831 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.831 06:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:56.831 06:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.831 06:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.831 06:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.831 06:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.831 06:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:56.831 06:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:56.831 06:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:16:56.831 06:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:56.831 06:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:56.831 06:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:56.831 06:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:56.831 06:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.831 06:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.831 06:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.831 06:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.831 06:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.831 06:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.831 06:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.831 06:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.093 00:16:57.093 06:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:57.093 06:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:57.093 06:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.354 06:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.354 06:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.354 06:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.354 06:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.354 06:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.354 06:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.354 { 00:16:57.354 "cntlid": 35, 00:16:57.354 "qid": 0, 00:16:57.354 "state": "enabled", 00:16:57.354 "thread": "nvmf_tgt_poll_group_000", 00:16:57.354 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:57.354 "listen_address": { 00:16:57.354 "trtype": "TCP", 00:16:57.354 "adrfam": "IPv4", 00:16:57.354 "traddr": "10.0.0.2", 00:16:57.354 "trsvcid": "4420" 00:16:57.354 }, 00:16:57.354 "peer_address": { 00:16:57.354 "trtype": "TCP", 00:16:57.354 "adrfam": "IPv4", 00:16:57.354 "traddr": "10.0.0.1", 00:16:57.354 "trsvcid": "46706" 00:16:57.354 }, 00:16:57.354 "auth": { 00:16:57.354 "state": "completed", 00:16:57.354 "digest": "sha256", 00:16:57.354 "dhgroup": "ffdhe6144" 00:16:57.354 } 00:16:57.354 } 00:16:57.354 ]' 00:16:57.354 06:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:57.354 06:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:57.354 06:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:57.354 06:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:57.615 06:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:57.615 06:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.615 06:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.615 06:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.615 06:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDdhOTI5YWRkZjUzZDI5ZmNkODkxOGZiNjk1ZmY4NTL+NxR6: --dhchap-ctrl-secret DHHC-1:02:Njc3OGU4M2U0ZTk1NTNmM2RjZjZkNGNlYmI1YmQyNTliNmYzMTczN2ViYTAyMjRhNWKlNw==: 00:16:57.615 06:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDdhOTI5YWRkZjUzZDI5ZmNkODkxOGZiNjk1ZmY4NTL+NxR6: --dhchap-ctrl-secret DHHC-1:02:Njc3OGU4M2U0ZTk1NTNmM2RjZjZkNGNlYmI1YmQyNTliNmYzMTczN2ViYTAyMjRhNWKlNw==: 00:16:58.559 06:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.559 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.559 06:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:58.559 06:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.559 06:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.559 06:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.559 06:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:58.559 06:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:58.559 06:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:58.559 06:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:16:58.559 06:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.559 06:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:58.559 06:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:58.559 06:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:58.559 06:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.559 06:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.559 06:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.559 06:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.559 06:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.559 06:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.559 06:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.559 06:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.819 00:16:58.819 06:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:58.819 06:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:58.819 06:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.080 06:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.080 06:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.080 06:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.080 06:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.080 06:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.080 06:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:59.080 { 00:16:59.080 "cntlid": 37, 00:16:59.080 "qid": 0, 00:16:59.080 "state": "enabled", 00:16:59.080 "thread": "nvmf_tgt_poll_group_000", 00:16:59.080 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:59.080 "listen_address": { 00:16:59.080 "trtype": "TCP", 00:16:59.080 "adrfam": "IPv4", 00:16:59.080 "traddr": "10.0.0.2", 00:16:59.080 "trsvcid": "4420" 00:16:59.080 }, 00:16:59.080 "peer_address": { 00:16:59.080 "trtype": "TCP", 00:16:59.080 "adrfam": "IPv4", 00:16:59.080 "traddr": "10.0.0.1", 00:16:59.080 "trsvcid": "46742" 00:16:59.080 }, 00:16:59.080 "auth": { 00:16:59.080 "state": "completed", 00:16:59.080 "digest": "sha256", 00:16:59.080 "dhgroup": "ffdhe6144" 00:16:59.080 } 00:16:59.080 } 00:16:59.080 ]' 00:16:59.080 06:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:59.080 06:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:59.080 06:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.080 06:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:59.080 06:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.341 06:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.341 06:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:16:59.341 06:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.341 06:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjcwY2ZhODBjMThkMmYyOWIzNDcyMzdhNTA3MzdiOTk0MGNjNTg1YmM4NWY1OWYwXsw24w==: --dhchap-ctrl-secret DHHC-1:01:NGMyMWU1YWEyMWYxYmE4ZWQwMzE0MzNiMWZlOWYzNmGduDOR: 00:16:59.341 06:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YjcwY2ZhODBjMThkMmYyOWIzNDcyMzdhNTA3MzdiOTk0MGNjNTg1YmM4NWY1OWYwXsw24w==: --dhchap-ctrl-secret DHHC-1:01:NGMyMWU1YWEyMWYxYmE4ZWQwMzE0MzNiMWZlOWYzNmGduDOR: 00:17:00.282 06:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.282 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.282 06:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:00.282 06:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.282 06:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.282 06:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.282 06:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:00.282 06:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:00.282 06:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:00.282 06:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:17:00.282 06:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.282 06:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:00.282 06:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:00.282 06:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:00.282 06:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.282 06:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:00.282 06:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.283 06:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.283 06:59:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.283 06:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:00.283 06:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:00.283 06:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:00.543 00:17:00.543 07:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:00.543 07:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:00.543 07:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.803 07:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.803 07:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.803 07:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.803 07:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.803 07:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.803 07:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:00.803 { 00:17:00.803 "cntlid": 39, 00:17:00.803 "qid": 0, 00:17:00.803 "state": "enabled", 00:17:00.803 "thread": "nvmf_tgt_poll_group_000", 00:17:00.803 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:00.803 "listen_address": { 00:17:00.803 "trtype": "TCP", 00:17:00.803 "adrfam": "IPv4", 00:17:00.803 "traddr": "10.0.0.2", 00:17:00.803 "trsvcid": "4420" 00:17:00.803 }, 00:17:00.803 "peer_address": { 00:17:00.803 "trtype": "TCP", 00:17:00.803 "adrfam": "IPv4", 00:17:00.803 "traddr": "10.0.0.1", 00:17:00.803 "trsvcid": "46762" 00:17:00.803 }, 00:17:00.803 "auth": { 00:17:00.803 "state": "completed", 00:17:00.803 "digest": "sha256", 00:17:00.803 "dhgroup": "ffdhe6144" 00:17:00.803 } 00:17:00.803 } 00:17:00.803 ]' 00:17:00.803 07:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:00.803 07:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:00.803 07:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.064 07:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:01.064 07:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.064 07:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:17:01.064 07:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.064 07:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.064 07:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2QxY2M5MThiNmI4ZDc0N2FhZDJmZjc1NTg2YmE2ZjdmMmE3NjY5NTJlMjc4YWYyYWY4Y2U2OGI4NDU0NWViYyKK2VQ=: 00:17:01.064 07:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:N2QxY2M5MThiNmI4ZDc0N2FhZDJmZjc1NTg2YmE2ZjdmMmE3NjY5NTJlMjc4YWYyYWY4Y2U2OGI4NDU0NWViYyKK2VQ=: 00:17:02.004 07:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.004 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.004 07:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:02.004 07:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.004 07:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.004 07:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.004 07:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:02.004 07:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:02.004 07:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:02.004 07:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:02.004 07:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:17:02.004 07:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:02.004 07:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:02.004 07:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:02.004 07:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:02.004 07:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.004 07:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.004 07:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
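
The loop above repeats one fixed RPC sequence per digest/dhgroup/key combination. A condensed sketch of that sequence, using only commands and arguments visible in this run (key0/ckey0 name keys registered earlier in the run; host-side calls go to the bdev_nvme RPC server on /var/tmp/host.sock, target-side calls use the default socket):

    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
    hostnqn="nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be"

    # host side: restrict negotiation to a single digest/DH-group pair
    "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

    # target side: allow the host and bind its DH-HMAC-CHAP key, plus a
    # controller key so the authentication is bidirectional
    "$rpc" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # authenticate by attaching a controller through the host RPC server
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
        -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
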
00:17:02.004 07:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.004 07:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.004 07:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.004 07:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.004 07:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.574 00:17:02.574 07:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:02.574 07:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:02.574 07:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.835 07:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.835 07:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.835 07:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.835 07:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.835 07:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.835 07:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:02.835 { 00:17:02.835 "cntlid": 41, 00:17:02.835 "qid": 0, 00:17:02.835 "state": "enabled", 00:17:02.835 "thread": "nvmf_tgt_poll_group_000", 00:17:02.835 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:02.835 "listen_address": { 00:17:02.835 "trtype": "TCP", 00:17:02.835 "adrfam": "IPv4", 00:17:02.835 "traddr": "10.0.0.2", 00:17:02.835 "trsvcid": "4420" 00:17:02.835 }, 00:17:02.835 "peer_address": { 00:17:02.835 "trtype": "TCP", 00:17:02.835 "adrfam": "IPv4", 00:17:02.835 "traddr": "10.0.0.1", 00:17:02.835 "trsvcid": "41450" 00:17:02.835 }, 00:17:02.835 "auth": { 00:17:02.835 "state": "completed", 00:17:02.835 "digest": "sha256", 00:17:02.835 "dhgroup": "ffdhe8192" 00:17:02.835 } 00:17:02.835 } 00:17:02.835 ]' 00:17:02.835 07:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:02.835 07:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:02.835 07:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:02.835 07:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:02.835 07:00:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:02.835 07:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.835 07:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.835 07:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.094 07:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWRlMjBlNGJkNDIwMzdjY2E4ODI2MWVhYTFiMmJlZjgyMTNkMDIyZjMwZWI3ZTAxBcGefQ==: --dhchap-ctrl-secret DHHC-1:03:MTU5OTE3M2RlMThkYTBmMzdkMGFlOTYzZTY3YmU3Mjk3ZGIxZmY2NzMzYTMzNjU5MTUzNDMyNjhhNjJkM2RhMR3Lg04=: 00:17:03.094 07:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:OWRlMjBlNGJkNDIwMzdjY2E4ODI2MWVhYTFiMmJlZjgyMTNkMDIyZjMwZWI3ZTAxBcGefQ==: --dhchap-ctrl-secret DHHC-1:03:MTU5OTE3M2RlMThkYTBmMzdkMGFlOTYzZTY3YmU3Mjk3ZGIxZmY2NzMzYTMzNjU5MTUzNDMyNjhhNjJkM2RhMR3Lg04=: 00:17:03.662 07:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.662 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.662 07:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:03.662 07:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.662 07:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.662 07:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.662 07:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:03.662 07:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:03.662 07:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:03.921 07:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:17:03.921 07:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:03.921 07:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:03.921 07:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:03.921 07:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:03.921 07:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.921 07:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.921 07:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.921 07:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.921 07:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.921 07:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.921 07:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.921 07:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.493 00:17:04.493 07:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:04.493 07:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:04.493 07:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.493 07:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.493 07:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.493 07:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.493 07:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.493 07:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.493 07:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:04.493 { 00:17:04.493 "cntlid": 43, 00:17:04.493 "qid": 0, 00:17:04.493 "state": "enabled", 00:17:04.493 "thread": "nvmf_tgt_poll_group_000", 00:17:04.493 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:04.493 "listen_address": { 00:17:04.493 "trtype": "TCP", 00:17:04.493 "adrfam": "IPv4", 00:17:04.493 "traddr": "10.0.0.2", 00:17:04.493 "trsvcid": "4420" 00:17:04.493 }, 00:17:04.493 "peer_address": { 00:17:04.493 "trtype": "TCP", 00:17:04.493 "adrfam": "IPv4", 00:17:04.493 "traddr": "10.0.0.1", 00:17:04.493 "trsvcid": "41484" 00:17:04.493 }, 00:17:04.493 "auth": { 00:17:04.493 "state": "completed", 00:17:04.493 "digest": "sha256", 00:17:04.493 "dhgroup": "ffdhe8192" 00:17:04.493 } 00:17:04.493 } 00:17:04.493 ]' 00:17:04.753 07:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:04.753 07:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:17:04.753 07:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:04.753 07:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:04.753 07:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:04.753 07:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.753 07:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.753 07:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.014 07:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDdhOTI5YWRkZjUzZDI5ZmNkODkxOGZiNjk1ZmY4NTL+NxR6: --dhchap-ctrl-secret DHHC-1:02:Njc3OGU4M2U0ZTk1NTNmM2RjZjZkNGNlYmI1YmQyNTliNmYzMTczN2ViYTAyMjRhNWKlNw==: 00:17:05.014 07:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDdhOTI5YWRkZjUzZDI5ZmNkODkxOGZiNjk1ZmY4NTL+NxR6: --dhchap-ctrl-secret DHHC-1:02:Njc3OGU4M2U0ZTk1NTNmM2RjZjZkNGNlYmI1YmQyNTliNmYzMTczN2ViYTAyMjRhNWKlNw==: 00:17:05.584 07:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.584 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.584 07:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:05.584 07:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.584 07:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.584 07:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.584 07:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:05.584 07:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:05.584 07:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:05.845 07:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:17:05.845 07:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:05.845 07:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:05.845 07:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:05.845 07:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:05.845 07:00:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.845 07:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.845 07:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.845 07:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.845 07:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.845 07:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.845 07:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.845 07:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.415 00:17:06.415 07:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:06.415 07:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:06.415 07:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.415 07:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.415 07:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.415 07:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.415 07:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.415 07:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.415 07:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:06.415 { 00:17:06.415 "cntlid": 45, 00:17:06.415 "qid": 0, 00:17:06.415 "state": "enabled", 00:17:06.415 "thread": "nvmf_tgt_poll_group_000", 00:17:06.415 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:06.415 "listen_address": { 00:17:06.415 "trtype": "TCP", 00:17:06.415 "adrfam": "IPv4", 00:17:06.415 "traddr": "10.0.0.2", 00:17:06.415 "trsvcid": "4420" 00:17:06.415 }, 00:17:06.415 "peer_address": { 00:17:06.415 "trtype": "TCP", 00:17:06.415 "adrfam": "IPv4", 00:17:06.415 "traddr": "10.0.0.1", 00:17:06.415 "trsvcid": "41514" 00:17:06.415 }, 00:17:06.415 "auth": { 00:17:06.415 "state": "completed", 00:17:06.415 "digest": "sha256", 00:17:06.415 "dhgroup": "ffdhe8192" 00:17:06.415 } 00:17:06.415 } 00:17:06.415 ]' 00:17:06.415 
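
Each attach is then verified from the target's side: nvmf_subsystem_get_qpairs reports the negotiated digest, DH group, and authentication state per queue pair, and the script asserts on all three. The checks condense to roughly the following, with $rpc and the subsystem NQN as in the earlier sketch and the jq paths exactly as used above:

    qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
    # "completed" means the DH-HMAC-CHAP exchange finished successfully
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
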
07:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:06.415 07:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:06.415 07:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:06.676 07:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:06.676 07:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:06.676 07:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.676 07:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.676 07:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.936 07:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjcwY2ZhODBjMThkMmYyOWIzNDcyMzdhNTA3MzdiOTk0MGNjNTg1YmM4NWY1OWYwXsw24w==: --dhchap-ctrl-secret DHHC-1:01:NGMyMWU1YWEyMWYxYmE4ZWQwMzE0MzNiMWZlOWYzNmGduDOR: 00:17:06.936 07:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YjcwY2ZhODBjMThkMmYyOWIzNDcyMzdhNTA3MzdiOTk0MGNjNTg1YmM4NWY1OWYwXsw24w==: --dhchap-ctrl-secret DHHC-1:01:NGMyMWU1YWEyMWYxYmE4ZWQwMzE0MzNiMWZlOWYzNmGduDOR: 00:17:07.507 07:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.507 07:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:07.507 07:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.507 07:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.507 07:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.507 07:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:07.507 07:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:07.507 07:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:07.768 07:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:17:07.768 07:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:07.768 07:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:07.768 07:00:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:07.768 07:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:07.768 07:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.768 07:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:07.768 07:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.768 07:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.768 07:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.768 07:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:07.768 07:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:07.768 07:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:08.028 00:17:08.288 07:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:08.288 07:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:08.288 07:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.288 07:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.288 07:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.288 07:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.288 07:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.288 07:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.288 07:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:08.288 { 00:17:08.288 "cntlid": 47, 00:17:08.288 "qid": 0, 00:17:08.288 "state": "enabled", 00:17:08.288 "thread": "nvmf_tgt_poll_group_000", 00:17:08.288 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:08.288 "listen_address": { 00:17:08.288 "trtype": "TCP", 00:17:08.288 "adrfam": "IPv4", 00:17:08.288 "traddr": "10.0.0.2", 00:17:08.288 "trsvcid": "4420" 00:17:08.288 }, 00:17:08.288 "peer_address": { 00:17:08.288 "trtype": "TCP", 00:17:08.288 "adrfam": "IPv4", 00:17:08.288 "traddr": "10.0.0.1", 00:17:08.288 "trsvcid": "41546" 00:17:08.288 }, 00:17:08.288 "auth": { 00:17:08.288 "state": "completed", 00:17:08.288 
"digest": "sha256", 00:17:08.288 "dhgroup": "ffdhe8192" 00:17:08.288 } 00:17:08.288 } 00:17:08.288 ]' 00:17:08.289 07:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:08.549 07:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:08.549 07:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:08.549 07:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:08.549 07:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:08.549 07:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.549 07:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.549 07:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.809 07:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2QxY2M5MThiNmI4ZDc0N2FhZDJmZjc1NTg2YmE2ZjdmMmE3NjY5NTJlMjc4YWYyYWY4Y2U2OGI4NDU0NWViYyKK2VQ=: 00:17:08.809 07:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:N2QxY2M5MThiNmI4ZDc0N2FhZDJmZjc1NTg2YmE2ZjdmMmE3NjY5NTJlMjc4YWYyYWY4Y2U2OGI4NDU0NWViYyKK2VQ=: 00:17:09.380 07:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.380 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.380 07:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:09.380 07:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.380 07:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.380 07:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.380 07:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:09.380 07:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:09.380 07:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:09.380 07:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:09.380 07:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:09.641 07:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:17:09.641 07:00:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:09.641 07:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:09.641 07:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:09.641 07:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:09.641 07:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.641 07:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.641 07:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.641 07:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.641 07:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.641 07:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.641 07:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.641 07:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.641 00:17:09.902 07:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:09.902 07:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:09.902 07:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.902 07:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.902 07:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.902 07:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.902 07:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.902 07:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.902 07:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:09.902 { 00:17:09.902 "cntlid": 49, 00:17:09.902 "qid": 0, 00:17:09.902 "state": "enabled", 00:17:09.902 "thread": "nvmf_tgt_poll_group_000", 00:17:09.902 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:09.902 "listen_address": { 00:17:09.902 "trtype": "TCP", 00:17:09.902 "adrfam": "IPv4", 
00:17:09.902 "traddr": "10.0.0.2", 00:17:09.902 "trsvcid": "4420" 00:17:09.902 }, 00:17:09.902 "peer_address": { 00:17:09.902 "trtype": "TCP", 00:17:09.902 "adrfam": "IPv4", 00:17:09.902 "traddr": "10.0.0.1", 00:17:09.902 "trsvcid": "41584" 00:17:09.902 }, 00:17:09.902 "auth": { 00:17:09.902 "state": "completed", 00:17:09.902 "digest": "sha384", 00:17:09.902 "dhgroup": "null" 00:17:09.902 } 00:17:09.902 } 00:17:09.902 ]' 00:17:09.902 07:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:09.902 07:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:09.902 07:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:10.164 07:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:10.164 07:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:10.164 07:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.164 07:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.164 07:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.164 07:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWRlMjBlNGJkNDIwMzdjY2E4ODI2MWVhYTFiMmJlZjgyMTNkMDIyZjMwZWI3ZTAxBcGefQ==: --dhchap-ctrl-secret DHHC-1:03:MTU5OTE3M2RlMThkYTBmMzdkMGFlOTYzZTY3YmU3Mjk3ZGIxZmY2NzMzYTMzNjU5MTUzNDMyNjhhNjJkM2RhMR3Lg04=: 00:17:10.164 07:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:OWRlMjBlNGJkNDIwMzdjY2E4ODI2MWVhYTFiMmJlZjgyMTNkMDIyZjMwZWI3ZTAxBcGefQ==: --dhchap-ctrl-secret DHHC-1:03:MTU5OTE3M2RlMThkYTBmMzdkMGFlOTYzZTY3YmU3Mjk3ZGIxZmY2NzMzYTMzNjU5MTUzNDMyNjhhNjJkM2RhMR3Lg04=: 00:17:11.110 07:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.110 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.110 07:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:11.110 07:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.110 07:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.110 07:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.110 07:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:11.110 07:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:11.110 07:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:11.110 07:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:17:11.110 07:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:11.110 07:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:11.110 07:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:11.110 07:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:11.110 07:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.110 07:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.110 07:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.110 07:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.110 07:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.110 07:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.110 07:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.110 07:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.371 00:17:11.371 07:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:11.371 07:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:11.371 07:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.633 07:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.633 07:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.633 07:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.634 07:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.634 07:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.634 07:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:11.634 { 00:17:11.634 "cntlid": 51, 00:17:11.634 "qid": 0, 00:17:11.634 "state": "enabled", 
00:17:11.634 "thread": "nvmf_tgt_poll_group_000", 00:17:11.634 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:11.634 "listen_address": { 00:17:11.634 "trtype": "TCP", 00:17:11.634 "adrfam": "IPv4", 00:17:11.634 "traddr": "10.0.0.2", 00:17:11.634 "trsvcid": "4420" 00:17:11.634 }, 00:17:11.634 "peer_address": { 00:17:11.634 "trtype": "TCP", 00:17:11.634 "adrfam": "IPv4", 00:17:11.634 "traddr": "10.0.0.1", 00:17:11.634 "trsvcid": "41618" 00:17:11.634 }, 00:17:11.634 "auth": { 00:17:11.634 "state": "completed", 00:17:11.634 "digest": "sha384", 00:17:11.634 "dhgroup": "null" 00:17:11.634 } 00:17:11.634 } 00:17:11.634 ]' 00:17:11.634 07:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:11.634 07:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:11.634 07:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:11.634 07:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:11.634 07:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:11.634 07:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.634 07:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.634 07:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.895 07:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDdhOTI5YWRkZjUzZDI5ZmNkODkxOGZiNjk1ZmY4NTL+NxR6: --dhchap-ctrl-secret DHHC-1:02:Njc3OGU4M2U0ZTk1NTNmM2RjZjZkNGNlYmI1YmQyNTliNmYzMTczN2ViYTAyMjRhNWKlNw==: 00:17:11.895 07:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDdhOTI5YWRkZjUzZDI5ZmNkODkxOGZiNjk1ZmY4NTL+NxR6: --dhchap-ctrl-secret DHHC-1:02:Njc3OGU4M2U0ZTk1NTNmM2RjZjZkNGNlYmI1YmQyNTliNmYzMTczN2ViYTAyMjRhNWKlNw==: 00:17:12.466 07:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.466 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.466 07:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:12.466 07:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.466 07:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.466 07:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.466 07:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:12.466 07:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:17:12.467 07:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:12.727 07:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:17:12.727 07:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:12.727 07:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:12.727 07:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:12.727 07:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:12.727 07:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.727 07:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:12.727 07:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.727 07:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.727 07:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.727 07:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:12.727 07:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:12.727 07:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:12.987 00:17:12.987 07:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:12.987 07:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:12.987 07:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.248 07:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.248 07:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.248 07:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.248 07:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.248 07:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.248 07:00:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:13.248 { 00:17:13.248 "cntlid": 53, 00:17:13.248 "qid": 0, 00:17:13.248 "state": "enabled", 00:17:13.248 "thread": "nvmf_tgt_poll_group_000", 00:17:13.248 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:13.248 "listen_address": { 00:17:13.248 "trtype": "TCP", 00:17:13.248 "adrfam": "IPv4", 00:17:13.248 "traddr": "10.0.0.2", 00:17:13.248 "trsvcid": "4420" 00:17:13.248 }, 00:17:13.248 "peer_address": { 00:17:13.248 "trtype": "TCP", 00:17:13.248 "adrfam": "IPv4", 00:17:13.248 "traddr": "10.0.0.1", 00:17:13.248 "trsvcid": "52060" 00:17:13.248 }, 00:17:13.248 "auth": { 00:17:13.248 "state": "completed", 00:17:13.248 "digest": "sha384", 00:17:13.248 "dhgroup": "null" 00:17:13.248 } 00:17:13.248 } 00:17:13.248 ]' 00:17:13.248 07:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.248 07:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:13.248 07:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:13.248 07:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:13.248 07:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:13.248 07:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.248 07:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.248 07:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.510 07:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjcwY2ZhODBjMThkMmYyOWIzNDcyMzdhNTA3MzdiOTk0MGNjNTg1YmM4NWY1OWYwXsw24w==: --dhchap-ctrl-secret DHHC-1:01:NGMyMWU1YWEyMWYxYmE4ZWQwMzE0MzNiMWZlOWYzNmGduDOR: 00:17:13.510 07:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YjcwY2ZhODBjMThkMmYyOWIzNDcyMzdhNTA3MzdiOTk0MGNjNTg1YmM4NWY1OWYwXsw24w==: --dhchap-ctrl-secret DHHC-1:01:NGMyMWU1YWEyMWYxYmE4ZWQwMzE0MzNiMWZlOWYzNmGduDOR: 00:17:14.081 07:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.081 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.081 07:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:14.081 07:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.081 07:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.081 07:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.081 07:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:17:14.081 07:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:14.081 07:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:14.342 07:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:17:14.342 07:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:14.342 07:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:14.342 07:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:14.342 07:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:14.342 07:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.342 07:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:14.342 07:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.342 07:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.342 07:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.342 07:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:14.342 07:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:14.342 07:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:14.602 00:17:14.602 07:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:14.602 07:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:14.602 07:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.863 07:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.863 07:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.863 07:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.863 07:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.863 07:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.863 07:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:14.863 { 00:17:14.863 "cntlid": 55, 00:17:14.863 "qid": 0, 00:17:14.863 "state": "enabled", 00:17:14.863 "thread": "nvmf_tgt_poll_group_000", 00:17:14.863 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:14.863 "listen_address": { 00:17:14.863 "trtype": "TCP", 00:17:14.863 "adrfam": "IPv4", 00:17:14.863 "traddr": "10.0.0.2", 00:17:14.863 "trsvcid": "4420" 00:17:14.863 }, 00:17:14.863 "peer_address": { 00:17:14.863 "trtype": "TCP", 00:17:14.863 "adrfam": "IPv4", 00:17:14.863 "traddr": "10.0.0.1", 00:17:14.863 "trsvcid": "52088" 00:17:14.863 }, 00:17:14.863 "auth": { 00:17:14.863 "state": "completed", 00:17:14.863 "digest": "sha384", 00:17:14.863 "dhgroup": "null" 00:17:14.863 } 00:17:14.863 } 00:17:14.863 ]' 00:17:14.863 07:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:14.863 07:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:14.863 07:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:14.863 07:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:14.863 07:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:14.863 07:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.863 07:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.863 07:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.123 07:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2QxY2M5MThiNmI4ZDc0N2FhZDJmZjc1NTg2YmE2ZjdmMmE3NjY5NTJlMjc4YWYyYWY4Y2U2OGI4NDU0NWViYyKK2VQ=: 00:17:15.123 07:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:N2QxY2M5MThiNmI4ZDc0N2FhZDJmZjc1NTg2YmE2ZjdmMmE3NjY5NTJlMjc4YWYyYWY4Y2U2OGI4NDU0NWViYyKK2VQ=: 00:17:15.694 07:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.694 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.694 07:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:15.694 07:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.694 07:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.694 07:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.694 07:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:15.694 07:00:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:15.694 07:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:15.694 07:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:15.955 07:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:17:15.955 07:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:15.955 07:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:15.955 07:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:15.955 07:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:15.955 07:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.955 07:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.955 07:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.955 07:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.955 07:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.955 07:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.955 07:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.955 07:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.216 00:17:16.216 07:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:16.216 07:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.216 07:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:16.477 07:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.477 07:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.477 07:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:16.477 07:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.477 07:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.477 07:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:16.477 { 00:17:16.477 "cntlid": 57, 00:17:16.477 "qid": 0, 00:17:16.477 "state": "enabled", 00:17:16.477 "thread": "nvmf_tgt_poll_group_000", 00:17:16.477 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:16.477 "listen_address": { 00:17:16.477 "trtype": "TCP", 00:17:16.477 "adrfam": "IPv4", 00:17:16.477 "traddr": "10.0.0.2", 00:17:16.477 "trsvcid": "4420" 00:17:16.477 }, 00:17:16.477 "peer_address": { 00:17:16.477 "trtype": "TCP", 00:17:16.477 "adrfam": "IPv4", 00:17:16.477 "traddr": "10.0.0.1", 00:17:16.477 "trsvcid": "52116" 00:17:16.477 }, 00:17:16.477 "auth": { 00:17:16.477 "state": "completed", 00:17:16.477 "digest": "sha384", 00:17:16.477 "dhgroup": "ffdhe2048" 00:17:16.477 } 00:17:16.477 } 00:17:16.477 ]' 00:17:16.477 07:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:16.477 07:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:16.477 07:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:16.477 07:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:16.477 07:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:16.477 07:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.477 07:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.477 07:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.738 07:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWRlMjBlNGJkNDIwMzdjY2E4ODI2MWVhYTFiMmJlZjgyMTNkMDIyZjMwZWI3ZTAxBcGefQ==: --dhchap-ctrl-secret DHHC-1:03:MTU5OTE3M2RlMThkYTBmMzdkMGFlOTYzZTY3YmU3Mjk3ZGIxZmY2NzMzYTMzNjU5MTUzNDMyNjhhNjJkM2RhMR3Lg04=: 00:17:16.738 07:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:OWRlMjBlNGJkNDIwMzdjY2E4ODI2MWVhYTFiMmJlZjgyMTNkMDIyZjMwZWI3ZTAxBcGefQ==: --dhchap-ctrl-secret DHHC-1:03:MTU5OTE3M2RlMThkYTBmMzdkMGFlOTYzZTY3YmU3Mjk3ZGIxZmY2NzMzYTMzNjU5MTUzNDMyNjhhNjJkM2RhMR3Lg04=: 00:17:17.311 07:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.572 07:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:17.572 07:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.572 07:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.572 07:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.572 07:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:17.572 07:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:17.572 07:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:17.572 07:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:17:17.572 07:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:17.572 07:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:17.572 07:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:17.572 07:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:17.572 07:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.572 07:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.572 07:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.572 07:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.572 07:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.572 07:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.572 07:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.572 07:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.833 00:17:17.833 07:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:17.833 07:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:17.833 07:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.093 07:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.093 07:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.093 07:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.093 07:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.093 07:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.093 07:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:18.093 { 00:17:18.093 "cntlid": 59, 00:17:18.093 "qid": 0, 00:17:18.093 "state": "enabled", 00:17:18.093 "thread": "nvmf_tgt_poll_group_000", 00:17:18.093 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:18.093 "listen_address": { 00:17:18.093 "trtype": "TCP", 00:17:18.093 "adrfam": "IPv4", 00:17:18.093 "traddr": "10.0.0.2", 00:17:18.093 "trsvcid": "4420" 00:17:18.093 }, 00:17:18.093 "peer_address": { 00:17:18.093 "trtype": "TCP", 00:17:18.093 "adrfam": "IPv4", 00:17:18.093 "traddr": "10.0.0.1", 00:17:18.093 "trsvcid": "52134" 00:17:18.093 }, 00:17:18.093 "auth": { 00:17:18.093 "state": "completed", 00:17:18.093 "digest": "sha384", 00:17:18.093 "dhgroup": "ffdhe2048" 00:17:18.093 } 00:17:18.093 } 00:17:18.093 ]' 00:17:18.093 07:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:18.093 07:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:18.093 07:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:18.093 07:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:18.093 07:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:18.093 07:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.093 07:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.093 07:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.354 07:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDdhOTI5YWRkZjUzZDI5ZmNkODkxOGZiNjk1ZmY4NTL+NxR6: --dhchap-ctrl-secret DHHC-1:02:Njc3OGU4M2U0ZTk1NTNmM2RjZjZkNGNlYmI1YmQyNTliNmYzMTczN2ViYTAyMjRhNWKlNw==: 00:17:18.354 07:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDdhOTI5YWRkZjUzZDI5ZmNkODkxOGZiNjk1ZmY4NTL+NxR6: --dhchap-ctrl-secret DHHC-1:02:Njc3OGU4M2U0ZTk1NTNmM2RjZjZkNGNlYmI1YmQyNTliNmYzMTczN2ViYTAyMjRhNWKlNw==: 00:17:18.925 07:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.925 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.925 07:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:18.925 07:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.925 07:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.186 07:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.186 07:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:19.186 07:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:19.186 07:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:19.186 07:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:17:19.186 07:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:19.186 07:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:19.186 07:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:19.186 07:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:19.186 07:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.186 07:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.186 07:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.186 07:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.186 07:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.186 07:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.186 07:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.186 07:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.447 00:17:19.447 07:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:19.447 07:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:19.447 07:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.707 07:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.707 07:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.707 07:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.707 07:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.707 07:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.707 07:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:19.707 { 00:17:19.707 "cntlid": 61, 00:17:19.707 "qid": 0, 00:17:19.707 "state": "enabled", 00:17:19.707 "thread": "nvmf_tgt_poll_group_000", 00:17:19.707 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:19.707 "listen_address": { 00:17:19.707 "trtype": "TCP", 00:17:19.707 "adrfam": "IPv4", 00:17:19.707 "traddr": "10.0.0.2", 00:17:19.707 "trsvcid": "4420" 00:17:19.707 }, 00:17:19.707 "peer_address": { 00:17:19.707 "trtype": "TCP", 00:17:19.707 "adrfam": "IPv4", 00:17:19.707 "traddr": "10.0.0.1", 00:17:19.707 "trsvcid": "52156" 00:17:19.707 }, 00:17:19.707 "auth": { 00:17:19.707 "state": "completed", 00:17:19.707 "digest": "sha384", 00:17:19.707 "dhgroup": "ffdhe2048" 00:17:19.707 } 00:17:19.707 } 00:17:19.707 ]' 00:17:19.707 07:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:19.707 07:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:19.707 07:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:19.707 07:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:19.707 07:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:19.707 07:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.707 07:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.707 07:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.968 07:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjcwY2ZhODBjMThkMmYyOWIzNDcyMzdhNTA3MzdiOTk0MGNjNTg1YmM4NWY1OWYwXsw24w==: --dhchap-ctrl-secret DHHC-1:01:NGMyMWU1YWEyMWYxYmE4ZWQwMzE0MzNiMWZlOWYzNmGduDOR: 00:17:19.968 07:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YjcwY2ZhODBjMThkMmYyOWIzNDcyMzdhNTA3MzdiOTk0MGNjNTg1YmM4NWY1OWYwXsw24w==: --dhchap-ctrl-secret DHHC-1:01:NGMyMWU1YWEyMWYxYmE4ZWQwMzE0MzNiMWZlOWYzNmGduDOR: 00:17:20.540 07:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.540 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.540 07:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:20.540 07:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.540 07:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.800 07:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.800 07:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:20.800 07:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:20.800 07:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:20.800 07:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:17:20.800 07:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:20.800 07:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:20.800 07:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:20.800 07:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:20.800 07:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.800 07:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:20.800 07:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.800 07:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.800 07:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.800 07:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:20.800 07:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:20.800 07:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:21.060 00:17:21.060 07:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:21.060 07:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:17:21.060 07:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.320 07:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.320 07:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.320 07:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.320 07:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.320 07:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.320 07:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:21.320 { 00:17:21.320 "cntlid": 63, 00:17:21.320 "qid": 0, 00:17:21.320 "state": "enabled", 00:17:21.320 "thread": "nvmf_tgt_poll_group_000", 00:17:21.320 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:21.320 "listen_address": { 00:17:21.320 "trtype": "TCP", 00:17:21.320 "adrfam": "IPv4", 00:17:21.320 "traddr": "10.0.0.2", 00:17:21.320 "trsvcid": "4420" 00:17:21.320 }, 00:17:21.320 "peer_address": { 00:17:21.320 "trtype": "TCP", 00:17:21.320 "adrfam": "IPv4", 00:17:21.320 "traddr": "10.0.0.1", 00:17:21.320 "trsvcid": "52182" 00:17:21.320 }, 00:17:21.320 "auth": { 00:17:21.320 "state": "completed", 00:17:21.320 "digest": "sha384", 00:17:21.320 "dhgroup": "ffdhe2048" 00:17:21.320 } 00:17:21.320 } 00:17:21.320 ]' 00:17:21.320 07:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:21.320 07:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:21.320 07:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:21.320 07:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:21.320 07:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:21.581 07:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.581 07:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.581 07:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.581 07:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2QxY2M5MThiNmI4ZDc0N2FhZDJmZjc1NTg2YmE2ZjdmMmE3NjY5NTJlMjc4YWYyYWY4Y2U2OGI4NDU0NWViYyKK2VQ=: 00:17:21.581 07:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:N2QxY2M5MThiNmI4ZDc0N2FhZDJmZjc1NTg2YmE2ZjdmMmE3NjY5NTJlMjc4YWYyYWY4Y2U2OGI4NDU0NWViYyKK2VQ=: 00:17:22.151 07:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:17:22.412 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.412 07:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:22.412 07:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.412 07:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.412 07:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.412 07:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:22.412 07:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:22.412 07:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:22.412 07:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:22.412 07:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:17:22.412 07:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:22.412 07:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:22.412 07:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:22.412 07:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:22.412 07:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.412 07:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.412 07:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.412 07:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.412 07:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.412 07:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.412 07:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.412 07:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.672 
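What follows is the suite's standard post-attach verification, repeated for each combination: confirm the host created controller nvme0, fetch the target-side qpair list, and assert with jq that the negotiated digest, DH group, and final auth state are the ones just configured. A condensed sketch for the sha384/ffdhe3072/key0 iteration in progress here (rpc_cmd is the suite's target-side RPC helper; hostrpc forwards to /var/tmp/host.sock as above):

    # hypothetical condensed form of the auth.sh@73..@78 checks traced below
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

    # detach so the next digest/dhgroup/key combination starts clean
    hostrpc bdev_nvme_detach_controller nvme0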
00:17:22.672 07:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:22.672 07:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:22.672 07:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.933 07:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.933 07:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.933 07:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.933 07:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.933 07:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.933 07:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:22.933 { 00:17:22.933 "cntlid": 65, 00:17:22.933 "qid": 0, 00:17:22.933 "state": "enabled", 00:17:22.933 "thread": "nvmf_tgt_poll_group_000", 00:17:22.933 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:22.933 "listen_address": { 00:17:22.933 "trtype": "TCP", 00:17:22.933 "adrfam": "IPv4", 00:17:22.933 "traddr": "10.0.0.2", 00:17:22.933 "trsvcid": "4420" 00:17:22.933 }, 00:17:22.933 "peer_address": { 00:17:22.933 "trtype": "TCP", 00:17:22.933 "adrfam": "IPv4", 00:17:22.933 "traddr": "10.0.0.1", 00:17:22.933 "trsvcid": "45864" 00:17:22.933 }, 00:17:22.933 "auth": { 00:17:22.933 "state": "completed", 00:17:22.933 "digest": "sha384", 00:17:22.933 "dhgroup": "ffdhe3072" 00:17:22.933 } 00:17:22.933 } 00:17:22.933 ]' 00:17:22.933 07:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:22.933 07:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:22.933 07:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:22.933 07:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:22.933 07:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:23.194 07:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.194 07:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.194 07:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.194 07:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWRlMjBlNGJkNDIwMzdjY2E4ODI2MWVhYTFiMmJlZjgyMTNkMDIyZjMwZWI3ZTAxBcGefQ==: --dhchap-ctrl-secret DHHC-1:03:MTU5OTE3M2RlMThkYTBmMzdkMGFlOTYzZTY3YmU3Mjk3ZGIxZmY2NzMzYTMzNjU5MTUzNDMyNjhhNjJkM2RhMR3Lg04=: 00:17:23.194 07:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:OWRlMjBlNGJkNDIwMzdjY2E4ODI2MWVhYTFiMmJlZjgyMTNkMDIyZjMwZWI3ZTAxBcGefQ==: --dhchap-ctrl-secret DHHC-1:03:MTU5OTE3M2RlMThkYTBmMzdkMGFlOTYzZTY3YmU3Mjk3ZGIxZmY2NzMzYTMzNjU5MTUzNDMyNjhhNjJkM2RhMR3Lg04=: 00:17:24.136 07:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.136 07:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:24.136 07:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.136 07:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.136 07:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.136 07:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:24.136 07:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:24.136 07:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:24.136 07:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:17:24.136 07:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:24.136 07:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:24.136 07:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:24.136 07:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:24.136 07:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.136 07:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.136 07:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.136 07:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.136 07:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.136 07:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.136 07:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.136 07:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.397 00:17:24.397 07:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:24.397 07:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:24.397 07:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.658 07:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.658 07:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.658 07:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.658 07:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.658 07:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.658 07:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:24.658 { 00:17:24.658 "cntlid": 67, 00:17:24.658 "qid": 0, 00:17:24.658 "state": "enabled", 00:17:24.658 "thread": "nvmf_tgt_poll_group_000", 00:17:24.658 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:24.658 "listen_address": { 00:17:24.658 "trtype": "TCP", 00:17:24.658 "adrfam": "IPv4", 00:17:24.658 "traddr": "10.0.0.2", 00:17:24.658 "trsvcid": "4420" 00:17:24.658 }, 00:17:24.658 "peer_address": { 00:17:24.658 "trtype": "TCP", 00:17:24.658 "adrfam": "IPv4", 00:17:24.658 "traddr": "10.0.0.1", 00:17:24.658 "trsvcid": "45882" 00:17:24.658 }, 00:17:24.658 "auth": { 00:17:24.658 "state": "completed", 00:17:24.658 "digest": "sha384", 00:17:24.658 "dhgroup": "ffdhe3072" 00:17:24.658 } 00:17:24.658 } 00:17:24.658 ]' 00:17:24.658 07:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:24.658 07:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:24.658 07:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:24.658 07:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:24.658 07:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:24.658 07:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.658 07:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.658 07:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.919 07:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDdhOTI5YWRkZjUzZDI5ZmNkODkxOGZiNjk1ZmY4NTL+NxR6: --dhchap-ctrl-secret 
DHHC-1:02:Njc3OGU4M2U0ZTk1NTNmM2RjZjZkNGNlYmI1YmQyNTliNmYzMTczN2ViYTAyMjRhNWKlNw==: 00:17:24.919 07:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDdhOTI5YWRkZjUzZDI5ZmNkODkxOGZiNjk1ZmY4NTL+NxR6: --dhchap-ctrl-secret DHHC-1:02:Njc3OGU4M2U0ZTk1NTNmM2RjZjZkNGNlYmI1YmQyNTliNmYzMTczN2ViYTAyMjRhNWKlNw==: 00:17:25.491 07:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.491 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.491 07:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:25.491 07:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.491 07:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.491 07:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.491 07:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:25.491 07:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:25.491 07:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:25.752 07:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:17:25.752 07:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:25.752 07:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:25.752 07:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:25.752 07:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:25.752 07:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.752 07:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.752 07:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.752 07:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.752 07:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.752 07:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.752 07:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.752 07:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.013 00:17:26.013 07:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:26.013 07:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:26.013 07:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.274 07:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.274 07:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.274 07:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.274 07:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.274 07:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.274 07:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:26.274 { 00:17:26.274 "cntlid": 69, 00:17:26.274 "qid": 0, 00:17:26.274 "state": "enabled", 00:17:26.274 "thread": "nvmf_tgt_poll_group_000", 00:17:26.274 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:26.274 "listen_address": { 00:17:26.274 "trtype": "TCP", 00:17:26.274 "adrfam": "IPv4", 00:17:26.274 "traddr": "10.0.0.2", 00:17:26.274 "trsvcid": "4420" 00:17:26.274 }, 00:17:26.274 "peer_address": { 00:17:26.274 "trtype": "TCP", 00:17:26.274 "adrfam": "IPv4", 00:17:26.274 "traddr": "10.0.0.1", 00:17:26.274 "trsvcid": "45904" 00:17:26.274 }, 00:17:26.274 "auth": { 00:17:26.274 "state": "completed", 00:17:26.274 "digest": "sha384", 00:17:26.274 "dhgroup": "ffdhe3072" 00:17:26.274 } 00:17:26.274 } 00:17:26.274 ]' 00:17:26.274 07:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:26.274 07:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:26.274 07:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:26.274 07:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:26.274 07:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:26.274 07:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.274 07:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.274 07:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:17:26.535 07:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjcwY2ZhODBjMThkMmYyOWIzNDcyMzdhNTA3MzdiOTk0MGNjNTg1YmM4NWY1OWYwXsw24w==: --dhchap-ctrl-secret DHHC-1:01:NGMyMWU1YWEyMWYxYmE4ZWQwMzE0MzNiMWZlOWYzNmGduDOR: 00:17:26.535 07:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YjcwY2ZhODBjMThkMmYyOWIzNDcyMzdhNTA3MzdiOTk0MGNjNTg1YmM4NWY1OWYwXsw24w==: --dhchap-ctrl-secret DHHC-1:01:NGMyMWU1YWEyMWYxYmE4ZWQwMzE0MzNiMWZlOWYzNmGduDOR: 00:17:27.105 07:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.105 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.105 07:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:27.105 07:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.105 07:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.105 07:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.105 07:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:27.105 07:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:27.105 07:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:27.367 07:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:17:27.367 07:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:27.367 07:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:27.367 07:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:27.367 07:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:27.367 07:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.367 07:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:27.367 07:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.367 07:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.367 07:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.367 07:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
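
For readability, the round that target/auth.sh repeats throughout this trace condenses to the sketch below. It is reconstructed from the commands logged above, not copied from the script source; the rpc.py path, the host.sock socket, the NQNs and the 10.0.0.2:4420 listener are the ones used in this run, and key2/ckey2 stand for whichever key pair the current iteration tests (the key names were registered earlier in the run).

  # Sketch of one connect_authenticate round (assumes the SPDK target and the
  # host application behind /var/tmp/host.sock are already running).
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
  subnqn=nqn.2024-03.io.spdk:cnode0

  # Host side: restrict negotiation to one digest/dhgroup pair (@121).
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

  # Target side: authorize the host NQN with the key pair under test (@70).
  $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Host side: attach a controller, forcing a DH-HMAC-CHAP handshake (@60).
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Verify the controller exists and what the target negotiated (@73-@77).
  $rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'  # nvme0
  $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.digest'      # sha384
  $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.dhgroup'     # ffdhe3072
  $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'       # completed

  # Tear down before the next digest/dhgroup/key combination (@78).
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

Note that the key3 iterations, like the one starting just above, carry no --dhchap-ctrlr-key: the @68 expansion ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) collapses to zero words when the controller-key entry is empty, so the flag disappears from both nvmf_subsystem_add_host and bdev_nvme_attach_controller.
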
00:17:27.367 07:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:27.367 07:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:27.628 00:17:27.628 07:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:27.628 07:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.628 07:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:27.888 07:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.888 07:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.888 07:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.888 07:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.888 07:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.888 07:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:27.888 { 00:17:27.888 "cntlid": 71, 00:17:27.888 "qid": 0, 00:17:27.888 "state": "enabled", 00:17:27.888 "thread": "nvmf_tgt_poll_group_000", 00:17:27.888 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:27.888 "listen_address": { 00:17:27.888 "trtype": "TCP", 00:17:27.888 "adrfam": "IPv4", 00:17:27.888 "traddr": "10.0.0.2", 00:17:27.888 "trsvcid": "4420" 00:17:27.888 }, 00:17:27.888 "peer_address": { 00:17:27.888 "trtype": "TCP", 00:17:27.888 "adrfam": "IPv4", 00:17:27.888 "traddr": "10.0.0.1", 00:17:27.888 "trsvcid": "45932" 00:17:27.888 }, 00:17:27.888 "auth": { 00:17:27.888 "state": "completed", 00:17:27.888 "digest": "sha384", 00:17:27.888 "dhgroup": "ffdhe3072" 00:17:27.888 } 00:17:27.888 } 00:17:27.888 ]' 00:17:27.888 07:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:27.888 07:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:27.889 07:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:27.889 07:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:27.889 07:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:27.889 07:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.889 07:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.889 07:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.148 07:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2QxY2M5MThiNmI4ZDc0N2FhZDJmZjc1NTg2YmE2ZjdmMmE3NjY5NTJlMjc4YWYyYWY4Y2U2OGI4NDU0NWViYyKK2VQ=: 00:17:28.148 07:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:N2QxY2M5MThiNmI4ZDc0N2FhZDJmZjc1NTg2YmE2ZjdmMmE3NjY5NTJlMjc4YWYyYWY4Y2U2OGI4NDU0NWViYyKK2VQ=: 00:17:28.719 07:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.719 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.719 07:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:28.719 07:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.719 07:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.719 07:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.719 07:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:28.719 07:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:28.719 07:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:28.719 07:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:28.980 07:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:17:28.980 07:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:28.980 07:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:28.980 07:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:28.980 07:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:28.980 07:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.980 07:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.980 07:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.980 07:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.980 07:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
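
The @119/@120 loop markers above show this whole round advancing to the next DH group (ffdhe3072 to ffdhe4096 here, then ffdhe6144 below), and between rounds the kernel initiator is also exercised end to end: @80/@36 run nvme connect with --dhchap-secret/--dhchap-ctrl-secret, @82 disconnects, and @83 drops the host from the subsystem again. The control flow implied by the trace is roughly the following sketch (not the verbatim script), where connect_authenticate is the round condensed earlier:

  # Outer loops as implied by the @119-@123 markers.
  for dhgroup in "${dhgroups[@]}"; do   # ffdhe3072, ffdhe4096, ffdhe6144, ... in this trace
      for keyid in "${!keys[@]}"; do    # 0, 1, 2, 3
          # Re-pin the host to the combination under test (@121) ...
          hostrpc bdev_nvme_set_options --dhchap-digests sha384 \
              --dhchap-dhgroups "$dhgroup"
          # ... then run one attach/verify/detach round (@123).
          connect_authenticate sha384 "$dhgroup" "$keyid"
      done
  done
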
00:17:28.980 07:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.980 07:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.980 07:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.240 00:17:29.240 07:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:29.240 07:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:29.240 07:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.501 07:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.501 07:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.501 07:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.501 07:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.501 07:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.501 07:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:29.501 { 00:17:29.501 "cntlid": 73, 00:17:29.501 "qid": 0, 00:17:29.501 "state": "enabled", 00:17:29.501 "thread": "nvmf_tgt_poll_group_000", 00:17:29.501 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:29.501 "listen_address": { 00:17:29.501 "trtype": "TCP", 00:17:29.501 "adrfam": "IPv4", 00:17:29.501 "traddr": "10.0.0.2", 00:17:29.501 "trsvcid": "4420" 00:17:29.501 }, 00:17:29.501 "peer_address": { 00:17:29.501 "trtype": "TCP", 00:17:29.501 "adrfam": "IPv4", 00:17:29.501 "traddr": "10.0.0.1", 00:17:29.501 "trsvcid": "45964" 00:17:29.501 }, 00:17:29.501 "auth": { 00:17:29.501 "state": "completed", 00:17:29.501 "digest": "sha384", 00:17:29.501 "dhgroup": "ffdhe4096" 00:17:29.501 } 00:17:29.501 } 00:17:29.501 ]' 00:17:29.501 07:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:29.501 07:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:29.501 07:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:29.501 07:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:29.501 07:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:29.501 07:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.501 
07:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.501 07:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.761 07:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWRlMjBlNGJkNDIwMzdjY2E4ODI2MWVhYTFiMmJlZjgyMTNkMDIyZjMwZWI3ZTAxBcGefQ==: --dhchap-ctrl-secret DHHC-1:03:MTU5OTE3M2RlMThkYTBmMzdkMGFlOTYzZTY3YmU3Mjk3ZGIxZmY2NzMzYTMzNjU5MTUzNDMyNjhhNjJkM2RhMR3Lg04=: 00:17:29.761 07:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:OWRlMjBlNGJkNDIwMzdjY2E4ODI2MWVhYTFiMmJlZjgyMTNkMDIyZjMwZWI3ZTAxBcGefQ==: --dhchap-ctrl-secret DHHC-1:03:MTU5OTE3M2RlMThkYTBmMzdkMGFlOTYzZTY3YmU3Mjk3ZGIxZmY2NzMzYTMzNjU5MTUzNDMyNjhhNjJkM2RhMR3Lg04=: 00:17:30.332 07:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.332 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.332 07:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:30.332 07:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.332 07:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.332 07:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.332 07:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:30.332 07:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:30.332 07:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:30.592 07:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:17:30.592 07:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:30.592 07:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:30.592 07:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:30.592 07:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:30.592 07:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.592 07:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.592 07:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.592 07:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.592 07:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.592 07:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.592 07:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.592 07:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.853 00:17:30.853 07:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:30.853 07:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:30.853 07:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.113 07:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.113 07:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.113 07:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.113 07:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.113 07:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.113 07:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:31.114 { 00:17:31.114 "cntlid": 75, 00:17:31.114 "qid": 0, 00:17:31.114 "state": "enabled", 00:17:31.114 "thread": "nvmf_tgt_poll_group_000", 00:17:31.114 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:31.114 "listen_address": { 00:17:31.114 "trtype": "TCP", 00:17:31.114 "adrfam": "IPv4", 00:17:31.114 "traddr": "10.0.0.2", 00:17:31.114 "trsvcid": "4420" 00:17:31.114 }, 00:17:31.114 "peer_address": { 00:17:31.114 "trtype": "TCP", 00:17:31.114 "adrfam": "IPv4", 00:17:31.114 "traddr": "10.0.0.1", 00:17:31.114 "trsvcid": "45992" 00:17:31.114 }, 00:17:31.114 "auth": { 00:17:31.114 "state": "completed", 00:17:31.114 "digest": "sha384", 00:17:31.114 "dhgroup": "ffdhe4096" 00:17:31.114 } 00:17:31.114 } 00:17:31.114 ]' 00:17:31.114 07:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:31.114 07:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:31.114 07:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:31.114 07:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:17:31.114 07:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:31.114 07:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.114 07:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.114 07:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.374 07:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDdhOTI5YWRkZjUzZDI5ZmNkODkxOGZiNjk1ZmY4NTL+NxR6: --dhchap-ctrl-secret DHHC-1:02:Njc3OGU4M2U0ZTk1NTNmM2RjZjZkNGNlYmI1YmQyNTliNmYzMTczN2ViYTAyMjRhNWKlNw==: 00:17:31.374 07:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDdhOTI5YWRkZjUzZDI5ZmNkODkxOGZiNjk1ZmY4NTL+NxR6: --dhchap-ctrl-secret DHHC-1:02:Njc3OGU4M2U0ZTk1NTNmM2RjZjZkNGNlYmI1YmQyNTliNmYzMTczN2ViYTAyMjRhNWKlNw==: 00:17:31.944 07:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.944 07:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:31.944 07:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.944 07:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.944 07:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.944 07:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:31.944 07:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:31.944 07:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:32.204 07:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:17:32.204 07:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:32.204 07:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:32.204 07:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:32.204 07:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:32.204 07:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.204 07:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.204 07:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.204 07:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.204 07:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.204 07:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.205 07:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.205 07:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.466 00:17:32.466 07:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:32.466 07:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:32.466 07:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.726 07:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.726 07:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.726 07:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.726 07:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.726 07:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.726 07:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:32.726 { 00:17:32.726 "cntlid": 77, 00:17:32.726 "qid": 0, 00:17:32.726 "state": "enabled", 00:17:32.726 "thread": "nvmf_tgt_poll_group_000", 00:17:32.726 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:32.726 "listen_address": { 00:17:32.726 "trtype": "TCP", 00:17:32.726 "adrfam": "IPv4", 00:17:32.726 "traddr": "10.0.0.2", 00:17:32.726 "trsvcid": "4420" 00:17:32.726 }, 00:17:32.726 "peer_address": { 00:17:32.726 "trtype": "TCP", 00:17:32.726 "adrfam": "IPv4", 00:17:32.726 "traddr": "10.0.0.1", 00:17:32.726 "trsvcid": "53762" 00:17:32.726 }, 00:17:32.726 "auth": { 00:17:32.726 "state": "completed", 00:17:32.726 "digest": "sha384", 00:17:32.726 "dhgroup": "ffdhe4096" 00:17:32.726 } 00:17:32.726 } 00:17:32.726 ]' 00:17:32.726 07:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:32.726 07:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:32.726 07:00:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:32.726 07:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:32.726 07:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:32.726 07:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.726 07:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.726 07:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.986 07:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjcwY2ZhODBjMThkMmYyOWIzNDcyMzdhNTA3MzdiOTk0MGNjNTg1YmM4NWY1OWYwXsw24w==: --dhchap-ctrl-secret DHHC-1:01:NGMyMWU1YWEyMWYxYmE4ZWQwMzE0MzNiMWZlOWYzNmGduDOR: 00:17:32.987 07:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YjcwY2ZhODBjMThkMmYyOWIzNDcyMzdhNTA3MzdiOTk0MGNjNTg1YmM4NWY1OWYwXsw24w==: --dhchap-ctrl-secret DHHC-1:01:NGMyMWU1YWEyMWYxYmE4ZWQwMzE0MzNiMWZlOWYzNmGduDOR: 00:17:33.557 07:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.557 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.557 07:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:33.557 07:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.557 07:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.557 07:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.557 07:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:33.557 07:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:33.557 07:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:33.846 07:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:17:33.846 07:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:33.846 07:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:33.846 07:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:33.846 07:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:33.846 07:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.846 07:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:33.846 07:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.846 07:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.846 07:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.846 07:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:33.846 07:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:33.846 07:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:34.154 00:17:34.154 07:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:34.154 07:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:34.154 07:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.442 07:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.442 07:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.442 07:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.442 07:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.442 07:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.442 07:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:34.442 { 00:17:34.442 "cntlid": 79, 00:17:34.442 "qid": 0, 00:17:34.442 "state": "enabled", 00:17:34.442 "thread": "nvmf_tgt_poll_group_000", 00:17:34.442 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:34.442 "listen_address": { 00:17:34.442 "trtype": "TCP", 00:17:34.442 "adrfam": "IPv4", 00:17:34.442 "traddr": "10.0.0.2", 00:17:34.442 "trsvcid": "4420" 00:17:34.442 }, 00:17:34.442 "peer_address": { 00:17:34.442 "trtype": "TCP", 00:17:34.442 "adrfam": "IPv4", 00:17:34.442 "traddr": "10.0.0.1", 00:17:34.442 "trsvcid": "53782" 00:17:34.442 }, 00:17:34.442 "auth": { 00:17:34.442 "state": "completed", 00:17:34.442 "digest": "sha384", 00:17:34.442 "dhgroup": "ffdhe4096" 00:17:34.442 } 00:17:34.442 } 00:17:34.442 ]' 00:17:34.442 07:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:34.442 07:00:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:34.442 07:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:34.442 07:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:34.442 07:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:34.442 07:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.442 07:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.442 07:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.718 07:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2QxY2M5MThiNmI4ZDc0N2FhZDJmZjc1NTg2YmE2ZjdmMmE3NjY5NTJlMjc4YWYyYWY4Y2U2OGI4NDU0NWViYyKK2VQ=: 00:17:34.718 07:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:N2QxY2M5MThiNmI4ZDc0N2FhZDJmZjc1NTg2YmE2ZjdmMmE3NjY5NTJlMjc4YWYyYWY4Y2U2OGI4NDU0NWViYyKK2VQ=: 00:17:35.290 07:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.290 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.290 07:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:35.290 07:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.290 07:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.290 07:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.290 07:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:35.290 07:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:35.290 07:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:35.290 07:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:35.551 07:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:17:35.551 07:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:35.551 07:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:35.551 07:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:35.551 07:00:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:35.551 07:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.551 07:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.551 07:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.551 07:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.551 07:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.551 07:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.551 07:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.551 07:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.812 00:17:35.812 07:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:35.812 07:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:35.812 07:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.073 07:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.073 07:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.073 07:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.073 07:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.073 07:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.073 07:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:36.073 { 00:17:36.073 "cntlid": 81, 00:17:36.073 "qid": 0, 00:17:36.073 "state": "enabled", 00:17:36.073 "thread": "nvmf_tgt_poll_group_000", 00:17:36.073 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:36.073 "listen_address": { 00:17:36.073 "trtype": "TCP", 00:17:36.073 "adrfam": "IPv4", 00:17:36.073 "traddr": "10.0.0.2", 00:17:36.073 "trsvcid": "4420" 00:17:36.073 }, 00:17:36.073 "peer_address": { 00:17:36.073 "trtype": "TCP", 00:17:36.073 "adrfam": "IPv4", 00:17:36.073 "traddr": "10.0.0.1", 00:17:36.073 "trsvcid": "53806" 00:17:36.073 }, 00:17:36.073 "auth": { 00:17:36.073 "state": "completed", 00:17:36.073 "digest": 
"sha384", 00:17:36.073 "dhgroup": "ffdhe6144" 00:17:36.073 } 00:17:36.073 } 00:17:36.073 ]' 00:17:36.073 07:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:36.073 07:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:36.073 07:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:36.073 07:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:36.073 07:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:36.073 07:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.074 07:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.074 07:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.334 07:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWRlMjBlNGJkNDIwMzdjY2E4ODI2MWVhYTFiMmJlZjgyMTNkMDIyZjMwZWI3ZTAxBcGefQ==: --dhchap-ctrl-secret DHHC-1:03:MTU5OTE3M2RlMThkYTBmMzdkMGFlOTYzZTY3YmU3Mjk3ZGIxZmY2NzMzYTMzNjU5MTUzNDMyNjhhNjJkM2RhMR3Lg04=: 00:17:36.334 07:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:OWRlMjBlNGJkNDIwMzdjY2E4ODI2MWVhYTFiMmJlZjgyMTNkMDIyZjMwZWI3ZTAxBcGefQ==: --dhchap-ctrl-secret DHHC-1:03:MTU5OTE3M2RlMThkYTBmMzdkMGFlOTYzZTY3YmU3Mjk3ZGIxZmY2NzMzYTMzNjU5MTUzNDMyNjhhNjJkM2RhMR3Lg04=: 00:17:36.903 07:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.903 07:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:36.903 07:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.903 07:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.904 07:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.904 07:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:36.904 07:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:36.904 07:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:37.163 07:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:17:37.163 07:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:37.163 07:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:37.163 07:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:37.163 07:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:37.163 07:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.163 07:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.163 07:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.163 07:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.163 07:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.164 07:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.164 07:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.164 07:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.424 00:17:37.684 07:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:37.684 07:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:37.684 07:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.684 07:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.684 07:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.684 07:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.684 07:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.684 07:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.684 07:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:37.684 { 00:17:37.684 "cntlid": 83, 00:17:37.684 "qid": 0, 00:17:37.684 "state": "enabled", 00:17:37.684 "thread": "nvmf_tgt_poll_group_000", 00:17:37.684 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:37.684 "listen_address": { 00:17:37.684 "trtype": "TCP", 00:17:37.684 "adrfam": "IPv4", 00:17:37.684 "traddr": "10.0.0.2", 00:17:37.684 
"trsvcid": "4420" 00:17:37.684 }, 00:17:37.685 "peer_address": { 00:17:37.685 "trtype": "TCP", 00:17:37.685 "adrfam": "IPv4", 00:17:37.685 "traddr": "10.0.0.1", 00:17:37.685 "trsvcid": "53818" 00:17:37.685 }, 00:17:37.685 "auth": { 00:17:37.685 "state": "completed", 00:17:37.685 "digest": "sha384", 00:17:37.685 "dhgroup": "ffdhe6144" 00:17:37.685 } 00:17:37.685 } 00:17:37.685 ]' 00:17:37.685 07:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:37.685 07:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:37.685 07:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:37.943 07:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:37.943 07:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:37.943 07:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.943 07:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.943 07:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.943 07:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDdhOTI5YWRkZjUzZDI5ZmNkODkxOGZiNjk1ZmY4NTL+NxR6: --dhchap-ctrl-secret DHHC-1:02:Njc3OGU4M2U0ZTk1NTNmM2RjZjZkNGNlYmI1YmQyNTliNmYzMTczN2ViYTAyMjRhNWKlNw==: 00:17:37.943 07:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDdhOTI5YWRkZjUzZDI5ZmNkODkxOGZiNjk1ZmY4NTL+NxR6: --dhchap-ctrl-secret DHHC-1:02:Njc3OGU4M2U0ZTk1NTNmM2RjZjZkNGNlYmI1YmQyNTliNmYzMTczN2ViYTAyMjRhNWKlNw==: 00:17:38.882 07:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.882 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.882 07:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:38.883 07:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.883 07:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.883 07:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.883 07:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:38.883 07:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:38.883 07:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:38.883 
07:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:17:38.883 07:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:38.883 07:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:38.883 07:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:38.883 07:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:38.883 07:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.883 07:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.883 07:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.883 07:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.883 07:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.883 07:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.883 07:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.883 07:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.143 00:17:39.143 07:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:39.143 07:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:39.143 07:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.404 07:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.404 07:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.404 07:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.404 07:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.404 07:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.404 07:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:39.404 { 00:17:39.404 "cntlid": 85, 00:17:39.404 "qid": 0, 00:17:39.404 "state": "enabled", 00:17:39.404 "thread": "nvmf_tgt_poll_group_000", 00:17:39.404 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:39.404 "listen_address": { 00:17:39.404 "trtype": "TCP", 00:17:39.404 "adrfam": "IPv4", 00:17:39.404 "traddr": "10.0.0.2", 00:17:39.404 "trsvcid": "4420" 00:17:39.404 }, 00:17:39.404 "peer_address": { 00:17:39.404 "trtype": "TCP", 00:17:39.404 "adrfam": "IPv4", 00:17:39.404 "traddr": "10.0.0.1", 00:17:39.404 "trsvcid": "53842" 00:17:39.404 }, 00:17:39.404 "auth": { 00:17:39.404 "state": "completed", 00:17:39.404 "digest": "sha384", 00:17:39.404 "dhgroup": "ffdhe6144" 00:17:39.404 } 00:17:39.404 } 00:17:39.404 ]' 00:17:39.404 07:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:39.404 07:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:39.404 07:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:39.404 07:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:39.404 07:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:39.665 07:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.665 07:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.665 07:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.665 07:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjcwY2ZhODBjMThkMmYyOWIzNDcyMzdhNTA3MzdiOTk0MGNjNTg1YmM4NWY1OWYwXsw24w==: --dhchap-ctrl-secret DHHC-1:01:NGMyMWU1YWEyMWYxYmE4ZWQwMzE0MzNiMWZlOWYzNmGduDOR: 00:17:39.665 07:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YjcwY2ZhODBjMThkMmYyOWIzNDcyMzdhNTA3MzdiOTk0MGNjNTg1YmM4NWY1OWYwXsw24w==: --dhchap-ctrl-secret DHHC-1:01:NGMyMWU1YWEyMWYxYmE4ZWQwMzE0MzNiMWZlOWYzNmGduDOR: 00:17:40.606 07:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.606 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.606 07:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:40.606 07:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.606 07:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.606 07:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.606 07:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:40.606 07:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:40.606 07:00:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:40.606 07:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:17:40.606 07:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:40.606 07:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:40.606 07:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:40.606 07:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:40.606 07:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.606 07:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:40.606 07:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.606 07:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.606 07:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.606 07:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:40.606 07:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:40.606 07:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:40.867 00:17:40.867 07:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:40.867 07:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:40.867 07:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.127 07:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.127 07:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.127 07:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.127 07:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.127 07:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.127 07:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:41.127 { 00:17:41.127 "cntlid": 87, 
00:17:41.127 "qid": 0, 00:17:41.127 "state": "enabled", 00:17:41.127 "thread": "nvmf_tgt_poll_group_000", 00:17:41.127 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:41.127 "listen_address": { 00:17:41.127 "trtype": "TCP", 00:17:41.127 "adrfam": "IPv4", 00:17:41.127 "traddr": "10.0.0.2", 00:17:41.127 "trsvcid": "4420" 00:17:41.127 }, 00:17:41.127 "peer_address": { 00:17:41.127 "trtype": "TCP", 00:17:41.127 "adrfam": "IPv4", 00:17:41.127 "traddr": "10.0.0.1", 00:17:41.127 "trsvcid": "53872" 00:17:41.127 }, 00:17:41.127 "auth": { 00:17:41.127 "state": "completed", 00:17:41.127 "digest": "sha384", 00:17:41.127 "dhgroup": "ffdhe6144" 00:17:41.127 } 00:17:41.127 } 00:17:41.127 ]' 00:17:41.127 07:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:41.127 07:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:41.128 07:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:41.128 07:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:41.128 07:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:41.388 07:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.388 07:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.388 07:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.388 07:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2QxY2M5MThiNmI4ZDc0N2FhZDJmZjc1NTg2YmE2ZjdmMmE3NjY5NTJlMjc4YWYyYWY4Y2U2OGI4NDU0NWViYyKK2VQ=: 00:17:41.388 07:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:N2QxY2M5MThiNmI4ZDc0N2FhZDJmZjc1NTg2YmE2ZjdmMmE3NjY5NTJlMjc4YWYyYWY4Y2U2OGI4NDU0NWViYyKK2VQ=: 00:17:41.959 07:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.220 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.220 07:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:42.220 07:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.220 07:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.220 07:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.220 07:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:42.220 07:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:42.220 07:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:42.220 07:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:42.220 07:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:17:42.220 07:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:42.220 07:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:42.220 07:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:42.220 07:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:42.220 07:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.220 07:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.220 07:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.220 07:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.220 07:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.220 07:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.220 07:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.220 07:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.792 00:17:42.792 07:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:42.792 07:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:42.792 07:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.053 07:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.053 07:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.053 07:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.053 07:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.053 07:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.053 07:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:43.053 { 00:17:43.053 "cntlid": 89, 00:17:43.053 "qid": 0, 00:17:43.053 "state": "enabled", 00:17:43.053 "thread": "nvmf_tgt_poll_group_000", 00:17:43.053 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:43.053 "listen_address": { 00:17:43.053 "trtype": "TCP", 00:17:43.053 "adrfam": "IPv4", 00:17:43.053 "traddr": "10.0.0.2", 00:17:43.053 "trsvcid": "4420" 00:17:43.053 }, 00:17:43.053 "peer_address": { 00:17:43.053 "trtype": "TCP", 00:17:43.053 "adrfam": "IPv4", 00:17:43.053 "traddr": "10.0.0.1", 00:17:43.053 "trsvcid": "55518" 00:17:43.053 }, 00:17:43.053 "auth": { 00:17:43.053 "state": "completed", 00:17:43.053 "digest": "sha384", 00:17:43.053 "dhgroup": "ffdhe8192" 00:17:43.053 } 00:17:43.053 } 00:17:43.053 ]' 00:17:43.053 07:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:43.054 07:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:43.054 07:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:43.054 07:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:43.054 07:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:43.054 07:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.054 07:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.054 07:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.314 07:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWRlMjBlNGJkNDIwMzdjY2E4ODI2MWVhYTFiMmJlZjgyMTNkMDIyZjMwZWI3ZTAxBcGefQ==: --dhchap-ctrl-secret DHHC-1:03:MTU5OTE3M2RlMThkYTBmMzdkMGFlOTYzZTY3YmU3Mjk3ZGIxZmY2NzMzYTMzNjU5MTUzNDMyNjhhNjJkM2RhMR3Lg04=: 00:17:43.314 07:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:OWRlMjBlNGJkNDIwMzdjY2E4ODI2MWVhYTFiMmJlZjgyMTNkMDIyZjMwZWI3ZTAxBcGefQ==: --dhchap-ctrl-secret DHHC-1:03:MTU5OTE3M2RlMThkYTBmMzdkMGFlOTYzZTY3YmU3Mjk3ZGIxZmY2NzMzYTMzNjU5MTUzNDMyNjhhNjJkM2RhMR3Lg04=: 00:17:43.886 07:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.886 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.886 07:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:43.886 07:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.886 07:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.886 07:00:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.886 07:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:43.886 07:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:43.886 07:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:44.146 07:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:17:44.146 07:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:44.146 07:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:44.146 07:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:44.146 07:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:44.146 07:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.146 07:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.146 07:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.146 07:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.146 07:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.146 07:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.147 07:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.147 07:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.718 00:17:44.718 07:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:44.718 07:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.718 07:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:44.718 07:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.718 07:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:44.718 07:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.718 07:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.718 07:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.718 07:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:44.718 { 00:17:44.718 "cntlid": 91, 00:17:44.718 "qid": 0, 00:17:44.718 "state": "enabled", 00:17:44.718 "thread": "nvmf_tgt_poll_group_000", 00:17:44.718 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:44.718 "listen_address": { 00:17:44.718 "trtype": "TCP", 00:17:44.718 "adrfam": "IPv4", 00:17:44.718 "traddr": "10.0.0.2", 00:17:44.718 "trsvcid": "4420" 00:17:44.718 }, 00:17:44.718 "peer_address": { 00:17:44.718 "trtype": "TCP", 00:17:44.718 "adrfam": "IPv4", 00:17:44.718 "traddr": "10.0.0.1", 00:17:44.718 "trsvcid": "55542" 00:17:44.718 }, 00:17:44.718 "auth": { 00:17:44.718 "state": "completed", 00:17:44.718 "digest": "sha384", 00:17:44.718 "dhgroup": "ffdhe8192" 00:17:44.718 } 00:17:44.718 } 00:17:44.718 ]' 00:17:44.718 07:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:44.718 07:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:44.718 07:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:44.979 07:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:44.979 07:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:44.979 07:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.979 07:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.979 07:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.980 07:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDdhOTI5YWRkZjUzZDI5ZmNkODkxOGZiNjk1ZmY4NTL+NxR6: --dhchap-ctrl-secret DHHC-1:02:Njc3OGU4M2U0ZTk1NTNmM2RjZjZkNGNlYmI1YmQyNTliNmYzMTczN2ViYTAyMjRhNWKlNw==: 00:17:44.980 07:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDdhOTI5YWRkZjUzZDI5ZmNkODkxOGZiNjk1ZmY4NTL+NxR6: --dhchap-ctrl-secret DHHC-1:02:Njc3OGU4M2U0ZTk1NTNmM2RjZjZkNGNlYmI1YmQyNTliNmYzMTczN2ViYTAyMjRhNWKlNw==: 00:17:45.922 07:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.922 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.922 07:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:45.922 07:00:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.922 07:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.922 07:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.922 07:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:45.922 07:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:45.922 07:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:45.922 07:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:17:45.922 07:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:45.922 07:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:45.922 07:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:45.922 07:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:45.922 07:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.922 07:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.922 07:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.922 07:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.922 07:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.922 07:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.922 07:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.922 07:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.494 00:17:46.494 07:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:46.494 07:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:46.494 07:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.494 07:00:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.494 07:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.494 07:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.494 07:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.755 07:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.755 07:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:46.755 { 00:17:46.755 "cntlid": 93, 00:17:46.755 "qid": 0, 00:17:46.755 "state": "enabled", 00:17:46.755 "thread": "nvmf_tgt_poll_group_000", 00:17:46.755 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:46.755 "listen_address": { 00:17:46.755 "trtype": "TCP", 00:17:46.755 "adrfam": "IPv4", 00:17:46.755 "traddr": "10.0.0.2", 00:17:46.755 "trsvcid": "4420" 00:17:46.755 }, 00:17:46.755 "peer_address": { 00:17:46.755 "trtype": "TCP", 00:17:46.755 "adrfam": "IPv4", 00:17:46.755 "traddr": "10.0.0.1", 00:17:46.755 "trsvcid": "55558" 00:17:46.755 }, 00:17:46.755 "auth": { 00:17:46.755 "state": "completed", 00:17:46.755 "digest": "sha384", 00:17:46.755 "dhgroup": "ffdhe8192" 00:17:46.755 } 00:17:46.755 } 00:17:46.755 ]' 00:17:46.755 07:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:46.755 07:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:46.755 07:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:46.755 07:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:46.755 07:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:46.755 07:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.755 07:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.755 07:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.016 07:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjcwY2ZhODBjMThkMmYyOWIzNDcyMzdhNTA3MzdiOTk0MGNjNTg1YmM4NWY1OWYwXsw24w==: --dhchap-ctrl-secret DHHC-1:01:NGMyMWU1YWEyMWYxYmE4ZWQwMzE0MzNiMWZlOWYzNmGduDOR: 00:17:47.016 07:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YjcwY2ZhODBjMThkMmYyOWIzNDcyMzdhNTA3MzdiOTk0MGNjNTg1YmM4NWY1OWYwXsw24w==: --dhchap-ctrl-secret DHHC-1:01:NGMyMWU1YWEyMWYxYmE4ZWQwMzE0MzNiMWZlOWYzNmGduDOR: 00:17:47.588 07:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.588 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.588 07:00:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:47.588 07:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.588 07:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.588 07:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.588 07:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:47.588 07:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:47.589 07:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:47.849 07:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:17:47.849 07:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:47.849 07:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:47.849 07:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:47.849 07:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:47.849 07:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.849 07:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:47.849 07:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.849 07:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.849 07:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.849 07:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:47.849 07:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:47.849 07:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:48.110 00:17:48.371 07:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:48.371 07:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:48.371 07:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.371 07:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.371 07:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.371 07:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.371 07:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.371 07:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.371 07:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:48.371 { 00:17:48.371 "cntlid": 95, 00:17:48.371 "qid": 0, 00:17:48.371 "state": "enabled", 00:17:48.371 "thread": "nvmf_tgt_poll_group_000", 00:17:48.371 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:48.371 "listen_address": { 00:17:48.371 "trtype": "TCP", 00:17:48.371 "adrfam": "IPv4", 00:17:48.371 "traddr": "10.0.0.2", 00:17:48.371 "trsvcid": "4420" 00:17:48.371 }, 00:17:48.371 "peer_address": { 00:17:48.371 "trtype": "TCP", 00:17:48.371 "adrfam": "IPv4", 00:17:48.371 "traddr": "10.0.0.1", 00:17:48.371 "trsvcid": "55586" 00:17:48.371 }, 00:17:48.371 "auth": { 00:17:48.371 "state": "completed", 00:17:48.371 "digest": "sha384", 00:17:48.371 "dhgroup": "ffdhe8192" 00:17:48.371 } 00:17:48.371 } 00:17:48.371 ]' 00:17:48.371 07:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:48.371 07:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:48.371 07:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:48.633 07:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:48.633 07:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:48.633 07:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.633 07:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.633 07:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.634 07:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2QxY2M5MThiNmI4ZDc0N2FhZDJmZjc1NTg2YmE2ZjdmMmE3NjY5NTJlMjc4YWYyYWY4Y2U2OGI4NDU0NWViYyKK2VQ=: 00:17:48.634 07:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:N2QxY2M5MThiNmI4ZDc0N2FhZDJmZjc1NTg2YmE2ZjdmMmE3NjY5NTJlMjc4YWYyYWY4Y2U2OGI4NDU0NWViYyKK2VQ=: 00:17:49.576 07:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.576 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.576 07:00:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:49.576 07:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.576 07:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.576 07:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.576 07:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:49.576 07:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:49.576 07:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:49.576 07:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:49.576 07:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:49.576 07:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:17:49.576 07:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:49.576 07:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:49.576 07:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:49.576 07:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:49.576 07:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.576 07:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.576 07:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.576 07:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.576 07:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.576 07:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.576 07:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.576 07:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.838 00:17:49.838 
07:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:49.838 07:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:49.838 07:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.099 07:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.099 07:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.099 07:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.099 07:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.099 07:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.099 07:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:50.099 { 00:17:50.099 "cntlid": 97, 00:17:50.099 "qid": 0, 00:17:50.099 "state": "enabled", 00:17:50.099 "thread": "nvmf_tgt_poll_group_000", 00:17:50.099 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:50.099 "listen_address": { 00:17:50.099 "trtype": "TCP", 00:17:50.099 "adrfam": "IPv4", 00:17:50.099 "traddr": "10.0.0.2", 00:17:50.099 "trsvcid": "4420" 00:17:50.099 }, 00:17:50.099 "peer_address": { 00:17:50.099 "trtype": "TCP", 00:17:50.099 "adrfam": "IPv4", 00:17:50.099 "traddr": "10.0.0.1", 00:17:50.099 "trsvcid": "55610" 00:17:50.099 }, 00:17:50.099 "auth": { 00:17:50.099 "state": "completed", 00:17:50.099 "digest": "sha512", 00:17:50.099 "dhgroup": "null" 00:17:50.099 } 00:17:50.099 } 00:17:50.099 ]' 00:17:50.099 07:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:50.099 07:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:50.099 07:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:50.099 07:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:50.099 07:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:50.099 07:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.099 07:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.099 07:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.360 07:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWRlMjBlNGJkNDIwMzdjY2E4ODI2MWVhYTFiMmJlZjgyMTNkMDIyZjMwZWI3ZTAxBcGefQ==: --dhchap-ctrl-secret DHHC-1:03:MTU5OTE3M2RlMThkYTBmMzdkMGFlOTYzZTY3YmU3Mjk3ZGIxZmY2NzMzYTMzNjU5MTUzNDMyNjhhNjJkM2RhMR3Lg04=: 00:17:50.360 07:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:OWRlMjBlNGJkNDIwMzdjY2E4ODI2MWVhYTFiMmJlZjgyMTNkMDIyZjMwZWI3ZTAxBcGefQ==: --dhchap-ctrl-secret DHHC-1:03:MTU5OTE3M2RlMThkYTBmMzdkMGFlOTYzZTY3YmU3Mjk3ZGIxZmY2NzMzYTMzNjU5MTUzNDMyNjhhNjJkM2RhMR3Lg04=: 00:17:50.932 07:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.932 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.932 07:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:50.932 07:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.932 07:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.932 07:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.932 07:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:50.932 07:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:50.932 07:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:51.194 07:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:17:51.194 07:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:51.194 07:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:51.194 07:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:51.194 07:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:51.194 07:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.194 07:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.194 07:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.194 07:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.194 07:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.194 07:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.194 07:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.194 07:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.455 00:17:51.455 07:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:51.455 07:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:51.455 07:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.455 07:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.717 07:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.717 07:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.717 07:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.717 07:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.717 07:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:51.717 { 00:17:51.717 "cntlid": 99, 00:17:51.717 "qid": 0, 00:17:51.717 "state": "enabled", 00:17:51.717 "thread": "nvmf_tgt_poll_group_000", 00:17:51.717 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:51.717 "listen_address": { 00:17:51.717 "trtype": "TCP", 00:17:51.717 "adrfam": "IPv4", 00:17:51.717 "traddr": "10.0.0.2", 00:17:51.717 "trsvcid": "4420" 00:17:51.717 }, 00:17:51.717 "peer_address": { 00:17:51.717 "trtype": "TCP", 00:17:51.717 "adrfam": "IPv4", 00:17:51.717 "traddr": "10.0.0.1", 00:17:51.717 "trsvcid": "55636" 00:17:51.717 }, 00:17:51.717 "auth": { 00:17:51.717 "state": "completed", 00:17:51.717 "digest": "sha512", 00:17:51.717 "dhgroup": "null" 00:17:51.717 } 00:17:51.717 } 00:17:51.717 ]' 00:17:51.717 07:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:51.717 07:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:51.717 07:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:51.717 07:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:51.717 07:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:51.717 07:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.717 07:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.717 07:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.978 07:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDdhOTI5YWRkZjUzZDI5ZmNkODkxOGZiNjk1ZmY4NTL+NxR6: --dhchap-ctrl-secret DHHC-1:02:Njc3OGU4M2U0ZTk1NTNmM2RjZjZkNGNlYmI1YmQyNTliNmYzMTczN2ViYTAyMjRhNWKlNw==: 00:17:51.978 07:00:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDdhOTI5YWRkZjUzZDI5ZmNkODkxOGZiNjk1ZmY4NTL+NxR6: --dhchap-ctrl-secret DHHC-1:02:Njc3OGU4M2U0ZTk1NTNmM2RjZjZkNGNlYmI1YmQyNTliNmYzMTczN2ViYTAyMjRhNWKlNw==: 00:17:52.550 07:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.550 07:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:52.550 07:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.550 07:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.550 07:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.550 07:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:52.550 07:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:52.550 07:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:52.811 07:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:17:52.811 07:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:52.811 07:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:52.811 07:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:52.811 07:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:52.811 07:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.811 07:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.811 07:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.811 07:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.811 07:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.811 07:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.812 07:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
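For readers following along: every connect_authenticate iteration in this log runs the same short RPC sequence, first against the target's default RPC socket and then against the host daemon listening on /var/tmp/host.sock. Below is a minimal condensed sketch of that sequence, built only from the paths, NQNs and flags that appear verbatim in the log above; the run_iteration wrapper is a hypothetical name used here for illustration and is not part of the SPDK test scripts.

    # Condensed replay of one connect_authenticate cycle (a sketch, assuming
    # a running SPDK target on the default RPC socket plus a host RPC server
    # at /var/tmp/host.sock, as in this test run).
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

    run_iteration() {   # hypothetical helper name, for illustration only
        local digest=$1 dhgroup=$2 keyid=$3
        # restrict the host to a single digest/dhgroup pair for this pass
        "$RPC" -s /var/tmp/host.sock bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # authorize the host on the subsystem with matching DH-HMAC-CHAP keys
        # (the controller key is omitted for indexes with no ckey, e.g. key3)
        "$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
            --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
        # authenticate by attaching a controller from the host side
        "$RPC" -s /var/tmp/host.sock bdev_nvme_attach_controller \
            -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
            -b nvme0 --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
        # tear down again; in the full test the detach and remove_host happen
        # around an extra nvme-cli connect leg, sketched further below
        "$RPC" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
        "$RPC" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
    }

    run_iteration sha512 null 2   # the iteration being logged at this point
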
00:17:52.812 07:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.073 00:17:53.073 07:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:53.073 07:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:53.073 07:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.073 07:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.073 07:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.073 07:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.073 07:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.073 07:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.073 07:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:53.073 { 00:17:53.073 "cntlid": 101, 00:17:53.073 "qid": 0, 00:17:53.073 "state": "enabled", 00:17:53.073 "thread": "nvmf_tgt_poll_group_000", 00:17:53.073 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:53.073 "listen_address": { 00:17:53.073 "trtype": "TCP", 00:17:53.073 "adrfam": "IPv4", 00:17:53.073 "traddr": "10.0.0.2", 00:17:53.073 "trsvcid": "4420" 00:17:53.073 }, 00:17:53.073 "peer_address": { 00:17:53.073 "trtype": "TCP", 00:17:53.073 "adrfam": "IPv4", 00:17:53.073 "traddr": "10.0.0.1", 00:17:53.073 "trsvcid": "33210" 00:17:53.073 }, 00:17:53.073 "auth": { 00:17:53.073 "state": "completed", 00:17:53.073 "digest": "sha512", 00:17:53.073 "dhgroup": "null" 00:17:53.073 } 00:17:53.073 } 00:17:53.073 ]' 00:17:53.073 07:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:53.334 07:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:53.334 07:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:53.334 07:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:53.334 07:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:53.334 07:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.334 07:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.334 07:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.595 07:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:YjcwY2ZhODBjMThkMmYyOWIzNDcyMzdhNTA3MzdiOTk0MGNjNTg1YmM4NWY1OWYwXsw24w==: --dhchap-ctrl-secret DHHC-1:01:NGMyMWU1YWEyMWYxYmE4ZWQwMzE0MzNiMWZlOWYzNmGduDOR: 00:17:53.596 07:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YjcwY2ZhODBjMThkMmYyOWIzNDcyMzdhNTA3MzdiOTk0MGNjNTg1YmM4NWY1OWYwXsw24w==: --dhchap-ctrl-secret DHHC-1:01:NGMyMWU1YWEyMWYxYmE4ZWQwMzE0MzNiMWZlOWYzNmGduDOR: 00:17:54.167 07:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.167 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.167 07:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:54.167 07:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.167 07:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.167 07:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.167 07:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:54.167 07:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:54.167 07:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:54.429 07:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:17:54.429 07:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:54.429 07:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:54.429 07:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:54.429 07:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:54.429 07:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.429 07:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:54.429 07:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.429 07:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.429 07:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.429 07:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:54.429 07:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:54.429 07:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:54.690 00:17:54.690 07:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:54.690 07:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:54.690 07:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.690 07:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.690 07:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.690 07:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.690 07:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.691 07:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.691 07:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:54.691 { 00:17:54.691 "cntlid": 103, 00:17:54.691 "qid": 0, 00:17:54.691 "state": "enabled", 00:17:54.691 "thread": "nvmf_tgt_poll_group_000", 00:17:54.691 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:54.691 "listen_address": { 00:17:54.691 "trtype": "TCP", 00:17:54.691 "adrfam": "IPv4", 00:17:54.691 "traddr": "10.0.0.2", 00:17:54.691 "trsvcid": "4420" 00:17:54.691 }, 00:17:54.691 "peer_address": { 00:17:54.691 "trtype": "TCP", 00:17:54.691 "adrfam": "IPv4", 00:17:54.691 "traddr": "10.0.0.1", 00:17:54.691 "trsvcid": "33244" 00:17:54.691 }, 00:17:54.691 "auth": { 00:17:54.691 "state": "completed", 00:17:54.691 "digest": "sha512", 00:17:54.691 "dhgroup": "null" 00:17:54.691 } 00:17:54.691 } 00:17:54.691 ]' 00:17:54.691 07:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:54.951 07:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:54.951 07:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:54.951 07:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:54.951 07:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:54.951 07:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.951 07:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.951 07:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.212 07:00:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2QxY2M5MThiNmI4ZDc0N2FhZDJmZjc1NTg2YmE2ZjdmMmE3NjY5NTJlMjc4YWYyYWY4Y2U2OGI4NDU0NWViYyKK2VQ=: 00:17:55.212 07:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:N2QxY2M5MThiNmI4ZDc0N2FhZDJmZjc1NTg2YmE2ZjdmMmE3NjY5NTJlMjc4YWYyYWY4Y2U2OGI4NDU0NWViYyKK2VQ=: 00:17:55.783 07:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.783 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.783 07:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:55.783 07:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.783 07:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.783 07:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.783 07:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:55.783 07:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:55.783 07:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:55.783 07:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:56.044 07:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:17:56.045 07:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:56.045 07:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:56.045 07:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:56.045 07:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:56.045 07:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.045 07:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.045 07:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.045 07:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.045 07:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.045 07:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
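[Editor's note — not part of the captured log] The xtrace above finishes one DH-CHAP round-trip per key and is about to attach with key0 over ffdhe2048. A minimal sketch of the host/target RPC sequence each iteration runs, assuming only what is visible in this trace: hostrpc wraps scripts/rpc.py against /var/tmp/host.sock, rpc_cmd talks to the target's default socket, and the NQNs, address and key names are the ones from this run.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
  subnqn=nqn.2024-03.io.spdk:cnode0
  # host side: restrict DH-CHAP negotiation to the digest/dhgroup under test
  "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
  # target side: allow this host NQN on the subsystem with the key pair under test
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # host side: attaching a controller forces the DH-CHAP handshake
  "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0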
00:17:56.045 07:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.045 07:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.045 00:17:56.307 07:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:56.307 07:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:56.307 07:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.307 07:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.307 07:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.307 07:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.307 07:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.307 07:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.307 07:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:56.307 { 00:17:56.307 "cntlid": 105, 00:17:56.307 "qid": 0, 00:17:56.307 "state": "enabled", 00:17:56.307 "thread": "nvmf_tgt_poll_group_000", 00:17:56.307 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:56.307 "listen_address": { 00:17:56.307 "trtype": "TCP", 00:17:56.307 "adrfam": "IPv4", 00:17:56.307 "traddr": "10.0.0.2", 00:17:56.307 "trsvcid": "4420" 00:17:56.307 }, 00:17:56.307 "peer_address": { 00:17:56.307 "trtype": "TCP", 00:17:56.307 "adrfam": "IPv4", 00:17:56.307 "traddr": "10.0.0.1", 00:17:56.307 "trsvcid": "33276" 00:17:56.307 }, 00:17:56.307 "auth": { 00:17:56.307 "state": "completed", 00:17:56.307 "digest": "sha512", 00:17:56.307 "dhgroup": "ffdhe2048" 00:17:56.307 } 00:17:56.307 } 00:17:56.307 ]' 00:17:56.307 07:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:56.307 07:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:56.307 07:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:56.568 07:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:56.568 07:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:56.568 07:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.568 07:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.568 07:00:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.829 07:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWRlMjBlNGJkNDIwMzdjY2E4ODI2MWVhYTFiMmJlZjgyMTNkMDIyZjMwZWI3ZTAxBcGefQ==: --dhchap-ctrl-secret DHHC-1:03:MTU5OTE3M2RlMThkYTBmMzdkMGFlOTYzZTY3YmU3Mjk3ZGIxZmY2NzMzYTMzNjU5MTUzNDMyNjhhNjJkM2RhMR3Lg04=: 00:17:56.829 07:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:OWRlMjBlNGJkNDIwMzdjY2E4ODI2MWVhYTFiMmJlZjgyMTNkMDIyZjMwZWI3ZTAxBcGefQ==: --dhchap-ctrl-secret DHHC-1:03:MTU5OTE3M2RlMThkYTBmMzdkMGFlOTYzZTY3YmU3Mjk3ZGIxZmY2NzMzYTMzNjU5MTUzNDMyNjhhNjJkM2RhMR3Lg04=: 00:17:57.400 07:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.400 07:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:57.400 07:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.400 07:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.400 07:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.400 07:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:57.400 07:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:57.400 07:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:57.660 07:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:17:57.660 07:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:57.660 07:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:57.660 07:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:57.660 07:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:57.661 07:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.661 07:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.661 07:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.661 07:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:57.661 07:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.661 07:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.661 07:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.661 07:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.944 00:17:57.944 07:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:57.944 07:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:57.944 07:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.944 07:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.944 07:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.944 07:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.945 07:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.945 07:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.945 07:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:57.945 { 00:17:57.945 "cntlid": 107, 00:17:57.945 "qid": 0, 00:17:57.945 "state": "enabled", 00:17:57.945 "thread": "nvmf_tgt_poll_group_000", 00:17:57.945 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:57.945 "listen_address": { 00:17:57.945 "trtype": "TCP", 00:17:57.945 "adrfam": "IPv4", 00:17:57.945 "traddr": "10.0.0.2", 00:17:57.945 "trsvcid": "4420" 00:17:57.945 }, 00:17:57.945 "peer_address": { 00:17:57.945 "trtype": "TCP", 00:17:57.945 "adrfam": "IPv4", 00:17:57.945 "traddr": "10.0.0.1", 00:17:57.945 "trsvcid": "33314" 00:17:57.945 }, 00:17:57.945 "auth": { 00:17:57.945 "state": "completed", 00:17:57.945 "digest": "sha512", 00:17:57.945 "dhgroup": "ffdhe2048" 00:17:57.945 } 00:17:57.945 } 00:17:57.945 ]' 00:17:57.945 07:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:58.207 07:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:58.207 07:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:58.207 07:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:58.207 07:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:17:58.207 07:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.207 07:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.207 07:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.467 07:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDdhOTI5YWRkZjUzZDI5ZmNkODkxOGZiNjk1ZmY4NTL+NxR6: --dhchap-ctrl-secret DHHC-1:02:Njc3OGU4M2U0ZTk1NTNmM2RjZjZkNGNlYmI1YmQyNTliNmYzMTczN2ViYTAyMjRhNWKlNw==: 00:17:58.467 07:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDdhOTI5YWRkZjUzZDI5ZmNkODkxOGZiNjk1ZmY4NTL+NxR6: --dhchap-ctrl-secret DHHC-1:02:Njc3OGU4M2U0ZTk1NTNmM2RjZjZkNGNlYmI1YmQyNTliNmYzMTczN2ViYTAyMjRhNWKlNw==: 00:17:59.037 07:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.038 07:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:59.038 07:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.038 07:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.038 07:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.038 07:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:59.038 07:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:59.038 07:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:59.299 07:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:17:59.299 07:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:59.299 07:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:59.299 07:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:59.299 07:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:59.299 07:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.299 07:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
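[Editor's note — not part of the captured log] After each attach, the script cross-checks the negotiated auth parameters on the target, detaches, and repeats the handshake through the kernel initiator before removing the host again. A sketch of that verification half, with the jq filters and nvme-cli flags taken from the trace; $key and $ckey stand for the DHHC-1:... secrets printed above (the script captures the qpair JSON into a variable rather than piping, so the pipelines below are a simplification), and $rpc/$hostnqn/$subnqn are as in the previous note.

  # target side: the accepted qpair should report the negotiated parameters
  "$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.digest'   # expect sha512
  "$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.dhgroup'  # expect the dhgroup under test
  "$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'    # expect completed
  "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  # same handshake via the kernel host stack (nvme-cli)
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
      --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 \
      --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
  nvme disconnect -n "$subnqn"
  "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"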
00:17:59.299 07:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.299 07:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.299 07:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.299 07:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.299 07:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.299 07:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.560 00:17:59.560 07:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:59.560 07:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:59.560 07:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.560 07:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.560 07:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.560 07:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.560 07:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.560 07:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.560 07:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:59.560 { 00:17:59.560 "cntlid": 109, 00:17:59.560 "qid": 0, 00:17:59.560 "state": "enabled", 00:17:59.560 "thread": "nvmf_tgt_poll_group_000", 00:17:59.560 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:59.560 "listen_address": { 00:17:59.560 "trtype": "TCP", 00:17:59.560 "adrfam": "IPv4", 00:17:59.560 "traddr": "10.0.0.2", 00:17:59.560 "trsvcid": "4420" 00:17:59.560 }, 00:17:59.560 "peer_address": { 00:17:59.560 "trtype": "TCP", 00:17:59.560 "adrfam": "IPv4", 00:17:59.560 "traddr": "10.0.0.1", 00:17:59.560 "trsvcid": "33336" 00:17:59.560 }, 00:17:59.560 "auth": { 00:17:59.560 "state": "completed", 00:17:59.560 "digest": "sha512", 00:17:59.560 "dhgroup": "ffdhe2048" 00:17:59.560 } 00:17:59.560 } 00:17:59.560 ]' 00:17:59.560 07:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:59.821 07:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:59.821 07:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:59.821 07:00:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:59.821 07:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:59.821 07:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.821 07:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.821 07:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.081 07:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjcwY2ZhODBjMThkMmYyOWIzNDcyMzdhNTA3MzdiOTk0MGNjNTg1YmM4NWY1OWYwXsw24w==: --dhchap-ctrl-secret DHHC-1:01:NGMyMWU1YWEyMWYxYmE4ZWQwMzE0MzNiMWZlOWYzNmGduDOR: 00:18:00.081 07:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YjcwY2ZhODBjMThkMmYyOWIzNDcyMzdhNTA3MzdiOTk0MGNjNTg1YmM4NWY1OWYwXsw24w==: --dhchap-ctrl-secret DHHC-1:01:NGMyMWU1YWEyMWYxYmE4ZWQwMzE0MzNiMWZlOWYzNmGduDOR: 00:18:00.651 07:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.651 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.651 07:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:00.651 07:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.651 07:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.651 07:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.651 07:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:00.651 07:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:00.651 07:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:00.911 07:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:18:00.911 07:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:00.911 07:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:00.911 07:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:00.911 07:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:00.911 07:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.911 07:01:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:00.911 07:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.911 07:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.911 07:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.911 07:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:00.911 07:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:00.911 07:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:01.171 00:18:01.171 07:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:01.171 07:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:01.171 07:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.171 07:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.171 07:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.171 07:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.171 07:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.171 07:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.171 07:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:01.171 { 00:18:01.171 "cntlid": 111, 00:18:01.171 "qid": 0, 00:18:01.171 "state": "enabled", 00:18:01.171 "thread": "nvmf_tgt_poll_group_000", 00:18:01.171 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:01.171 "listen_address": { 00:18:01.171 "trtype": "TCP", 00:18:01.171 "adrfam": "IPv4", 00:18:01.171 "traddr": "10.0.0.2", 00:18:01.171 "trsvcid": "4420" 00:18:01.171 }, 00:18:01.171 "peer_address": { 00:18:01.171 "trtype": "TCP", 00:18:01.171 "adrfam": "IPv4", 00:18:01.171 "traddr": "10.0.0.1", 00:18:01.171 "trsvcid": "33362" 00:18:01.171 }, 00:18:01.171 "auth": { 00:18:01.171 "state": "completed", 00:18:01.171 "digest": "sha512", 00:18:01.171 "dhgroup": "ffdhe2048" 00:18:01.171 } 00:18:01.171 } 00:18:01.171 ]' 00:18:01.171 07:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:01.495 07:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:01.495 
07:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:01.495 07:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:01.495 07:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:01.495 07:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.495 07:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.495 07:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.495 07:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2QxY2M5MThiNmI4ZDc0N2FhZDJmZjc1NTg2YmE2ZjdmMmE3NjY5NTJlMjc4YWYyYWY4Y2U2OGI4NDU0NWViYyKK2VQ=: 00:18:01.496 07:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:N2QxY2M5MThiNmI4ZDc0N2FhZDJmZjc1NTg2YmE2ZjdmMmE3NjY5NTJlMjc4YWYyYWY4Y2U2OGI4NDU0NWViYyKK2VQ=: 00:18:02.433 07:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.433 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.433 07:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:02.433 07:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.433 07:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.433 07:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.433 07:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:02.433 07:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:02.433 07:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:02.433 07:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:02.433 07:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:18:02.433 07:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:02.433 07:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:02.433 07:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:02.433 07:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:02.433 07:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.433 07:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.433 07:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.433 07:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.433 07:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.433 07:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.433 07:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.433 07:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.694 00:18:02.694 07:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:02.694 07:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:02.694 07:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.954 07:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.954 07:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.954 07:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.954 07:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.954 07:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.954 07:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:02.954 { 00:18:02.954 "cntlid": 113, 00:18:02.954 "qid": 0, 00:18:02.954 "state": "enabled", 00:18:02.954 "thread": "nvmf_tgt_poll_group_000", 00:18:02.954 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:02.954 "listen_address": { 00:18:02.954 "trtype": "TCP", 00:18:02.954 "adrfam": "IPv4", 00:18:02.954 "traddr": "10.0.0.2", 00:18:02.954 "trsvcid": "4420" 00:18:02.954 }, 00:18:02.954 "peer_address": { 00:18:02.954 "trtype": "TCP", 00:18:02.954 "adrfam": "IPv4", 00:18:02.954 "traddr": "10.0.0.1", 00:18:02.954 "trsvcid": "56930" 00:18:02.954 }, 00:18:02.954 "auth": { 00:18:02.954 "state": "completed", 00:18:02.954 "digest": "sha512", 00:18:02.954 "dhgroup": "ffdhe3072" 00:18:02.954 } 00:18:02.954 } 00:18:02.954 ]' 00:18:02.954 07:01:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:02.954 07:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:02.954 07:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:02.954 07:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:02.955 07:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:02.955 07:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.955 07:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.955 07:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.215 07:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWRlMjBlNGJkNDIwMzdjY2E4ODI2MWVhYTFiMmJlZjgyMTNkMDIyZjMwZWI3ZTAxBcGefQ==: --dhchap-ctrl-secret DHHC-1:03:MTU5OTE3M2RlMThkYTBmMzdkMGFlOTYzZTY3YmU3Mjk3ZGIxZmY2NzMzYTMzNjU5MTUzNDMyNjhhNjJkM2RhMR3Lg04=: 00:18:03.215 07:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:OWRlMjBlNGJkNDIwMzdjY2E4ODI2MWVhYTFiMmJlZjgyMTNkMDIyZjMwZWI3ZTAxBcGefQ==: --dhchap-ctrl-secret DHHC-1:03:MTU5OTE3M2RlMThkYTBmMzdkMGFlOTYzZTY3YmU3Mjk3ZGIxZmY2NzMzYTMzNjU5MTUzNDMyNjhhNjJkM2RhMR3Lg04=: 00:18:03.786 07:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.786 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.786 07:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:03.786 07:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.786 07:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.786 07:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.786 07:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:03.786 07:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:03.786 07:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:04.047 07:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:18:04.047 07:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:04.047 07:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:18:04.047 07:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:04.047 07:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:04.047 07:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.047 07:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.047 07:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.047 07:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.047 07:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.047 07:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.047 07:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.047 07:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.308 00:18:04.308 07:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:04.308 07:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:04.308 07:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.569 07:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.569 07:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.569 07:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.569 07:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.569 07:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.569 07:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:04.569 { 00:18:04.569 "cntlid": 115, 00:18:04.569 "qid": 0, 00:18:04.569 "state": "enabled", 00:18:04.569 "thread": "nvmf_tgt_poll_group_000", 00:18:04.569 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:04.569 "listen_address": { 00:18:04.569 "trtype": "TCP", 00:18:04.569 "adrfam": "IPv4", 00:18:04.569 "traddr": "10.0.0.2", 00:18:04.569 "trsvcid": "4420" 00:18:04.569 }, 00:18:04.569 "peer_address": { 00:18:04.569 "trtype": "TCP", 00:18:04.569 "adrfam": "IPv4", 
00:18:04.569 "traddr": "10.0.0.1", 00:18:04.569 "trsvcid": "56966" 00:18:04.569 }, 00:18:04.569 "auth": { 00:18:04.569 "state": "completed", 00:18:04.569 "digest": "sha512", 00:18:04.569 "dhgroup": "ffdhe3072" 00:18:04.569 } 00:18:04.569 } 00:18:04.569 ]' 00:18:04.569 07:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:04.569 07:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:04.569 07:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:04.569 07:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:04.569 07:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:04.569 07:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.569 07:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.569 07:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.829 07:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDdhOTI5YWRkZjUzZDI5ZmNkODkxOGZiNjk1ZmY4NTL+NxR6: --dhchap-ctrl-secret DHHC-1:02:Njc3OGU4M2U0ZTk1NTNmM2RjZjZkNGNlYmI1YmQyNTliNmYzMTczN2ViYTAyMjRhNWKlNw==: 00:18:04.829 07:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDdhOTI5YWRkZjUzZDI5ZmNkODkxOGZiNjk1ZmY4NTL+NxR6: --dhchap-ctrl-secret DHHC-1:02:Njc3OGU4M2U0ZTk1NTNmM2RjZjZkNGNlYmI1YmQyNTliNmYzMTczN2ViYTAyMjRhNWKlNw==: 00:18:05.400 07:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.400 07:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:05.400 07:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.400 07:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.400 07:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.400 07:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:05.400 07:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:05.400 07:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:05.661 07:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:18:05.661 07:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:05.661 07:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:05.661 07:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:05.661 07:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:05.661 07:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.661 07:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.661 07:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.661 07:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.661 07:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.661 07:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.661 07:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.661 07:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.921 00:18:05.921 07:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:05.921 07:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:05.921 07:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.181 07:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.181 07:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.181 07:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.181 07:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.181 07:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.181 07:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:06.181 { 00:18:06.181 "cntlid": 117, 00:18:06.181 "qid": 0, 00:18:06.181 "state": "enabled", 00:18:06.181 "thread": "nvmf_tgt_poll_group_000", 00:18:06.181 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:06.181 "listen_address": { 00:18:06.181 "trtype": "TCP", 
00:18:06.181 "adrfam": "IPv4", 00:18:06.181 "traddr": "10.0.0.2", 00:18:06.181 "trsvcid": "4420" 00:18:06.181 }, 00:18:06.181 "peer_address": { 00:18:06.181 "trtype": "TCP", 00:18:06.181 "adrfam": "IPv4", 00:18:06.181 "traddr": "10.0.0.1", 00:18:06.181 "trsvcid": "56998" 00:18:06.181 }, 00:18:06.181 "auth": { 00:18:06.181 "state": "completed", 00:18:06.181 "digest": "sha512", 00:18:06.181 "dhgroup": "ffdhe3072" 00:18:06.181 } 00:18:06.181 } 00:18:06.181 ]' 00:18:06.181 07:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:06.181 07:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:06.181 07:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:06.181 07:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:06.181 07:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:06.181 07:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.181 07:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.181 07:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.442 07:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjcwY2ZhODBjMThkMmYyOWIzNDcyMzdhNTA3MzdiOTk0MGNjNTg1YmM4NWY1OWYwXsw24w==: --dhchap-ctrl-secret DHHC-1:01:NGMyMWU1YWEyMWYxYmE4ZWQwMzE0MzNiMWZlOWYzNmGduDOR: 00:18:06.442 07:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YjcwY2ZhODBjMThkMmYyOWIzNDcyMzdhNTA3MzdiOTk0MGNjNTg1YmM4NWY1OWYwXsw24w==: --dhchap-ctrl-secret DHHC-1:01:NGMyMWU1YWEyMWYxYmE4ZWQwMzE0MzNiMWZlOWYzNmGduDOR: 00:18:07.014 07:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.014 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.014 07:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:07.014 07:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.014 07:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.276 07:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.276 07:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:07.276 07:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:07.276 07:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:07.276 07:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:18:07.276 07:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:07.276 07:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:07.276 07:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:07.276 07:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:07.276 07:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.276 07:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:07.276 07:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.276 07:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.276 07:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.276 07:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:07.276 07:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:07.276 07:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:07.538 00:18:07.538 07:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:07.538 07:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:07.538 07:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.800 07:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.800 07:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.800 07:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.800 07:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.800 07:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.800 07:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:07.800 { 00:18:07.800 "cntlid": 119, 00:18:07.800 "qid": 0, 00:18:07.800 "state": "enabled", 00:18:07.800 "thread": "nvmf_tgt_poll_group_000", 00:18:07.800 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:07.800 "listen_address": { 00:18:07.800 "trtype": "TCP", 00:18:07.800 "adrfam": "IPv4", 00:18:07.800 "traddr": "10.0.0.2", 00:18:07.800 "trsvcid": "4420" 00:18:07.800 }, 00:18:07.800 "peer_address": { 00:18:07.800 "trtype": "TCP", 00:18:07.800 "adrfam": "IPv4", 00:18:07.800 "traddr": "10.0.0.1", 00:18:07.800 "trsvcid": "57012" 00:18:07.800 }, 00:18:07.800 "auth": { 00:18:07.800 "state": "completed", 00:18:07.800 "digest": "sha512", 00:18:07.800 "dhgroup": "ffdhe3072" 00:18:07.800 } 00:18:07.800 } 00:18:07.800 ]' 00:18:07.800 07:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:07.800 07:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:07.800 07:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:07.800 07:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:07.800 07:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:07.800 07:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.800 07:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.800 07:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.060 07:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2QxY2M5MThiNmI4ZDc0N2FhZDJmZjc1NTg2YmE2ZjdmMmE3NjY5NTJlMjc4YWYyYWY4Y2U2OGI4NDU0NWViYyKK2VQ=: 00:18:08.061 07:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:N2QxY2M5MThiNmI4ZDc0N2FhZDJmZjc1NTg2YmE2ZjdmMmE3NjY5NTJlMjc4YWYyYWY4Y2U2OGI4NDU0NWViYyKK2VQ=: 00:18:08.632 07:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.632 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.632 07:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:08.632 07:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.632 07:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.894 07:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.894 07:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:08.894 07:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:08.894 07:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:08.894 07:01:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:08.894 07:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:18:08.894 07:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:08.894 07:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:08.894 07:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:08.894 07:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:08.894 07:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.894 07:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.894 07:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.894 07:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.894 07:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.894 07:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.894 07:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.894 07:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.155 00:18:09.155 07:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:09.155 07:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:09.155 07:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.416 07:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.416 07:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.416 07:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.416 07:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.416 07:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.416 07:01:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:09.416 {
00:18:09.416 "cntlid": 121,
00:18:09.416 "qid": 0,
00:18:09.416 "state": "enabled",
00:18:09.416 "thread": "nvmf_tgt_poll_group_000",
00:18:09.416 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:18:09.416 "listen_address": {
00:18:09.416 "trtype": "TCP",
00:18:09.416 "adrfam": "IPv4",
00:18:09.416 "traddr": "10.0.0.2",
00:18:09.416 "trsvcid": "4420"
00:18:09.416 },
00:18:09.416 "peer_address": {
00:18:09.416 "trtype": "TCP",
00:18:09.416 "adrfam": "IPv4",
00:18:09.416 "traddr": "10.0.0.1",
00:18:09.416 "trsvcid": "57032"
00:18:09.416 },
00:18:09.416 "auth": {
00:18:09.416 "state": "completed",
00:18:09.416 "digest": "sha512",
00:18:09.416 "dhgroup": "ffdhe4096"
00:18:09.416 }
00:18:09.416 }
00:18:09.416 ]'
00:18:09.416 07:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:09.416 07:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:18:09.416 07:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:09.416 07:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:18:09.416 07:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:09.416 07:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:09.416 07:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:09.416 07:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:09.677 07:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWRlMjBlNGJkNDIwMzdjY2E4ODI2MWVhYTFiMmJlZjgyMTNkMDIyZjMwZWI3ZTAxBcGefQ==: --dhchap-ctrl-secret DHHC-1:03:MTU5OTE3M2RlMThkYTBmMzdkMGFlOTYzZTY3YmU3Mjk3ZGIxZmY2NzMzYTMzNjU5MTUzNDMyNjhhNjJkM2RhMR3Lg04=:
00:18:09.677 07:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:OWRlMjBlNGJkNDIwMzdjY2E4ODI2MWVhYTFiMmJlZjgyMTNkMDIyZjMwZWI3ZTAxBcGefQ==: --dhchap-ctrl-secret DHHC-1:03:MTU5OTE3M2RlMThkYTBmMzdkMGFlOTYzZTY3YmU3Mjk3ZGIxZmY2NzMzYTMzNjU5MTUzNDMyNjhhNjJkM2RhMR3Lg04=:
00:18:10.248 07:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:10.248 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:10.248 07:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:10.248 07:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:10.248 07:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:10.248 07:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
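[Editor's sketch] The trace above is one complete pass of the sweep in target/auth.sh: pick a digest/dhgroup pair, authorize the host for one key id on the target, attach a controller through the host-side RPC socket (which drives the DH-HMAC-CHAP handshake), confirm the negotiated auth parameters on the accepted qpair, then repeat the handshake through the kernel initiator and tear everything down. The script below condenses that pass; it is a reconstruction from this trace, not the test script itself. The rpc.py path, addresses, NQNs, RPC names and flags are copied verbatim from the commands above; the round() helper, the herestring jq checks, and the assumption that the target app answers on rpc.py's default socket (the run's rpc_cmd wrapper hides the real one) are mine; key0..key3 and ckey0..ckey2 are assumed to be registered in a keyring earlier in the run, and the literal DHHC-1 secrets are left as placeholders.

#!/usr/bin/env bash
# Reconstruction of one sha512 pass of the auth sweep traced above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock            # host-side SPDK app ("hostrpc")
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

round() {
    local digest=$1 dhgroup=$2 keyid=$3 qpairs
    # Host bdev layer: advertise only the digest/dhgroup under test.
    "$rpc" -s "$hostsock" bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # Target: authorize the host NQN with this round's key pair.
    # (The trace omits --dhchap-ctrlr-key when ckey$keyid is unset,
    # e.g. for key3; it is always passed here for brevity.)
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    # Host: attach a controller; this runs the DH-HMAC-CHAP handshake.
    "$rpc" -s "$hostsock" bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    # Target: the accepted qpair must report the negotiated parameters.
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest'  <<<"$qpairs") == "$digest"  ]] || return 1
    [[ $(jq -r '.[0].auth.dhgroup' <<<"$qpairs") == "$dhgroup" ]] || return 1
    [[ $(jq -r '.[0].auth.state'   <<<"$qpairs") == completed  ]] || return 1
    "$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0
    # Same handshake through the kernel initiator; the run supplies the
    # literal DHHC-1 secrets matching key$keyid (placeholders here).
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 \
        --dhchap-secret "<DHHC-1 secret for key$keyid>" \
        --dhchap-ctrl-secret "<DHHC-1 secret for ckey$keyid>"
    nvme disconnect -n "$subnqn"
    # Drop the host entry so the next key id starts from a clean slate.
    "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
}

# This stretch of the log covers these combinations:
for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
    for keyid in 0 1 2 3; do
        round sha512 "$dhgroup" "$keyid"
    done
done

The per-round teardown (detach, disconnect, remove_host) is what lets the sweep reuse the same controller name nvme0 and the same host NQN across every digest/dhgroup/key combination, which is why the trace repeats the identical command shapes with only the dhgroup, key id and secrets changing.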
00:18:10.248 07:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:10.248 07:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:10.248 07:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:10.509 07:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:18:10.509 07:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:10.509 07:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:10.509 07:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:10.509 07:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:10.509 07:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.509 07:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.509 07:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.509 07:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.509 07:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.509 07:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.509 07:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.509 07:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.769 00:18:10.769 07:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:10.769 07:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:10.769 07:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.030 07:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.030 07:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.030 07:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.030 07:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.030 07:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.030 07:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:11.030 { 00:18:11.030 "cntlid": 123, 00:18:11.030 "qid": 0, 00:18:11.030 "state": "enabled", 00:18:11.030 "thread": "nvmf_tgt_poll_group_000", 00:18:11.030 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:11.030 "listen_address": { 00:18:11.030 "trtype": "TCP", 00:18:11.030 "adrfam": "IPv4", 00:18:11.030 "traddr": "10.0.0.2", 00:18:11.030 "trsvcid": "4420" 00:18:11.030 }, 00:18:11.030 "peer_address": { 00:18:11.030 "trtype": "TCP", 00:18:11.030 "adrfam": "IPv4", 00:18:11.030 "traddr": "10.0.0.1", 00:18:11.030 "trsvcid": "57064" 00:18:11.030 }, 00:18:11.030 "auth": { 00:18:11.030 "state": "completed", 00:18:11.030 "digest": "sha512", 00:18:11.030 "dhgroup": "ffdhe4096" 00:18:11.030 } 00:18:11.030 } 00:18:11.030 ]' 00:18:11.030 07:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:11.030 07:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:11.030 07:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:11.030 07:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:11.030 07:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:11.291 07:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.291 07:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.291 07:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.291 07:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDdhOTI5YWRkZjUzZDI5ZmNkODkxOGZiNjk1ZmY4NTL+NxR6: --dhchap-ctrl-secret DHHC-1:02:Njc3OGU4M2U0ZTk1NTNmM2RjZjZkNGNlYmI1YmQyNTliNmYzMTczN2ViYTAyMjRhNWKlNw==: 00:18:11.291 07:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDdhOTI5YWRkZjUzZDI5ZmNkODkxOGZiNjk1ZmY4NTL+NxR6: --dhchap-ctrl-secret DHHC-1:02:Njc3OGU4M2U0ZTk1NTNmM2RjZjZkNGNlYmI1YmQyNTliNmYzMTczN2ViYTAyMjRhNWKlNw==: 00:18:12.241 07:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.241 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.241 07:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:12.241 07:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.241 07:01:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.241 07:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.241 07:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:12.241 07:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:12.241 07:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:12.241 07:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:18:12.241 07:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:12.241 07:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:12.241 07:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:12.241 07:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:12.241 07:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.241 07:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.241 07:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.241 07:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.241 07:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.241 07:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.241 07:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.241 07:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.520 00:18:12.520 07:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:12.520 07:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:12.520 07:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.816 07:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.816 07:01:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.816 07:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.816 07:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.816 07:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.816 07:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:12.816 { 00:18:12.816 "cntlid": 125, 00:18:12.816 "qid": 0, 00:18:12.816 "state": "enabled", 00:18:12.816 "thread": "nvmf_tgt_poll_group_000", 00:18:12.816 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:12.816 "listen_address": { 00:18:12.816 "trtype": "TCP", 00:18:12.816 "adrfam": "IPv4", 00:18:12.816 "traddr": "10.0.0.2", 00:18:12.816 "trsvcid": "4420" 00:18:12.816 }, 00:18:12.816 "peer_address": { 00:18:12.816 "trtype": "TCP", 00:18:12.816 "adrfam": "IPv4", 00:18:12.816 "traddr": "10.0.0.1", 00:18:12.816 "trsvcid": "40682" 00:18:12.816 }, 00:18:12.816 "auth": { 00:18:12.816 "state": "completed", 00:18:12.816 "digest": "sha512", 00:18:12.816 "dhgroup": "ffdhe4096" 00:18:12.816 } 00:18:12.816 } 00:18:12.816 ]' 00:18:12.816 07:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:12.816 07:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:12.816 07:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:12.816 07:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:12.816 07:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:12.816 07:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.816 07:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.816 07:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.076 07:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjcwY2ZhODBjMThkMmYyOWIzNDcyMzdhNTA3MzdiOTk0MGNjNTg1YmM4NWY1OWYwXsw24w==: --dhchap-ctrl-secret DHHC-1:01:NGMyMWU1YWEyMWYxYmE4ZWQwMzE0MzNiMWZlOWYzNmGduDOR: 00:18:13.076 07:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YjcwY2ZhODBjMThkMmYyOWIzNDcyMzdhNTA3MzdiOTk0MGNjNTg1YmM4NWY1OWYwXsw24w==: --dhchap-ctrl-secret DHHC-1:01:NGMyMWU1YWEyMWYxYmE4ZWQwMzE0MzNiMWZlOWYzNmGduDOR: 00:18:13.648 07:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.648 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.648 07:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:13.648 07:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.648 07:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.648 07:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.648 07:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:13.648 07:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:13.648 07:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:13.908 07:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:18:13.908 07:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:13.908 07:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:13.908 07:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:13.909 07:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:13.909 07:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.909 07:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:13.909 07:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.909 07:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.909 07:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.909 07:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:13.909 07:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:13.909 07:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:14.169 00:18:14.169 07:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:14.169 07:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:14.169 07:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.429 07:01:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.429 07:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.429 07:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.429 07:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.429 07:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.429 07:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:14.429 { 00:18:14.430 "cntlid": 127, 00:18:14.430 "qid": 0, 00:18:14.430 "state": "enabled", 00:18:14.430 "thread": "nvmf_tgt_poll_group_000", 00:18:14.430 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:14.430 "listen_address": { 00:18:14.430 "trtype": "TCP", 00:18:14.430 "adrfam": "IPv4", 00:18:14.430 "traddr": "10.0.0.2", 00:18:14.430 "trsvcid": "4420" 00:18:14.430 }, 00:18:14.430 "peer_address": { 00:18:14.430 "trtype": "TCP", 00:18:14.430 "adrfam": "IPv4", 00:18:14.430 "traddr": "10.0.0.1", 00:18:14.430 "trsvcid": "40724" 00:18:14.430 }, 00:18:14.430 "auth": { 00:18:14.430 "state": "completed", 00:18:14.430 "digest": "sha512", 00:18:14.430 "dhgroup": "ffdhe4096" 00:18:14.430 } 00:18:14.430 } 00:18:14.430 ]' 00:18:14.430 07:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:14.430 07:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:14.430 07:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:14.430 07:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:14.430 07:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:14.430 07:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.430 07:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.430 07:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.690 07:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2QxY2M5MThiNmI4ZDc0N2FhZDJmZjc1NTg2YmE2ZjdmMmE3NjY5NTJlMjc4YWYyYWY4Y2U2OGI4NDU0NWViYyKK2VQ=: 00:18:14.690 07:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:N2QxY2M5MThiNmI4ZDc0N2FhZDJmZjc1NTg2YmE2ZjdmMmE3NjY5NTJlMjc4YWYyYWY4Y2U2OGI4NDU0NWViYyKK2VQ=: 00:18:15.261 07:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.261 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.261 07:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:15.261 07:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.261 07:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.261 07:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.261 07:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:15.261 07:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:15.261 07:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:15.261 07:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:15.522 07:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:18:15.522 07:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:15.522 07:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:15.522 07:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:15.522 07:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:15.522 07:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.522 07:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.522 07:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.522 07:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.522 07:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.522 07:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.522 07:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.522 07:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.784 00:18:15.784 07:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:15.784 07:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:15.784 
07:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.045 07:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.045 07:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.045 07:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.045 07:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.045 07:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.045 07:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:16.045 { 00:18:16.045 "cntlid": 129, 00:18:16.045 "qid": 0, 00:18:16.045 "state": "enabled", 00:18:16.045 "thread": "nvmf_tgt_poll_group_000", 00:18:16.045 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:16.045 "listen_address": { 00:18:16.045 "trtype": "TCP", 00:18:16.045 "adrfam": "IPv4", 00:18:16.045 "traddr": "10.0.0.2", 00:18:16.045 "trsvcid": "4420" 00:18:16.045 }, 00:18:16.045 "peer_address": { 00:18:16.045 "trtype": "TCP", 00:18:16.045 "adrfam": "IPv4", 00:18:16.045 "traddr": "10.0.0.1", 00:18:16.045 "trsvcid": "40758" 00:18:16.045 }, 00:18:16.045 "auth": { 00:18:16.046 "state": "completed", 00:18:16.046 "digest": "sha512", 00:18:16.046 "dhgroup": "ffdhe6144" 00:18:16.046 } 00:18:16.046 } 00:18:16.046 ]' 00:18:16.046 07:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:16.046 07:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:16.046 07:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:16.046 07:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:16.046 07:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:16.046 07:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.046 07:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.046 07:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.307 07:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWRlMjBlNGJkNDIwMzdjY2E4ODI2MWVhYTFiMmJlZjgyMTNkMDIyZjMwZWI3ZTAxBcGefQ==: --dhchap-ctrl-secret DHHC-1:03:MTU5OTE3M2RlMThkYTBmMzdkMGFlOTYzZTY3YmU3Mjk3ZGIxZmY2NzMzYTMzNjU5MTUzNDMyNjhhNjJkM2RhMR3Lg04=: 00:18:16.308 07:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:OWRlMjBlNGJkNDIwMzdjY2E4ODI2MWVhYTFiMmJlZjgyMTNkMDIyZjMwZWI3ZTAxBcGefQ==: --dhchap-ctrl-secret 
DHHC-1:03:MTU5OTE3M2RlMThkYTBmMzdkMGFlOTYzZTY3YmU3Mjk3ZGIxZmY2NzMzYTMzNjU5MTUzNDMyNjhhNjJkM2RhMR3Lg04=: 00:18:16.880 07:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.880 07:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:16.880 07:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.880 07:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.880 07:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.880 07:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:16.880 07:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:16.880 07:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:17.141 07:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:18:17.141 07:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:17.141 07:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:17.141 07:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:17.141 07:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:17.141 07:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.141 07:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.141 07:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.141 07:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.141 07:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.141 07:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.141 07:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.141 07:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.402 00:18:17.402 07:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:17.402 07:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:17.402 07:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.664 07:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.664 07:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.664 07:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.664 07:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.664 07:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.664 07:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:17.664 { 00:18:17.664 "cntlid": 131, 00:18:17.664 "qid": 0, 00:18:17.664 "state": "enabled", 00:18:17.664 "thread": "nvmf_tgt_poll_group_000", 00:18:17.664 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:17.664 "listen_address": { 00:18:17.664 "trtype": "TCP", 00:18:17.664 "adrfam": "IPv4", 00:18:17.664 "traddr": "10.0.0.2", 00:18:17.664 "trsvcid": "4420" 00:18:17.664 }, 00:18:17.664 "peer_address": { 00:18:17.664 "trtype": "TCP", 00:18:17.664 "adrfam": "IPv4", 00:18:17.664 "traddr": "10.0.0.1", 00:18:17.664 "trsvcid": "40794" 00:18:17.664 }, 00:18:17.664 "auth": { 00:18:17.664 "state": "completed", 00:18:17.664 "digest": "sha512", 00:18:17.664 "dhgroup": "ffdhe6144" 00:18:17.664 } 00:18:17.664 } 00:18:17.664 ]' 00:18:17.664 07:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:17.664 07:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:17.664 07:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:17.664 07:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:17.664 07:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:17.925 07:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.925 07:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.925 07:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.926 07:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDdhOTI5YWRkZjUzZDI5ZmNkODkxOGZiNjk1ZmY4NTL+NxR6: --dhchap-ctrl-secret DHHC-1:02:Njc3OGU4M2U0ZTk1NTNmM2RjZjZkNGNlYmI1YmQyNTliNmYzMTczN2ViYTAyMjRhNWKlNw==: 00:18:17.926 07:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDdhOTI5YWRkZjUzZDI5ZmNkODkxOGZiNjk1ZmY4NTL+NxR6: --dhchap-ctrl-secret DHHC-1:02:Njc3OGU4M2U0ZTk1NTNmM2RjZjZkNGNlYmI1YmQyNTliNmYzMTczN2ViYTAyMjRhNWKlNw==: 00:18:18.866 07:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.866 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.867 07:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:18.867 07:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.867 07:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.867 07:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.867 07:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:18.867 07:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:18.867 07:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:18.867 07:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:18:18.867 07:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:18.867 07:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:18.867 07:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:18.867 07:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:18.867 07:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.867 07:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.867 07:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.867 07:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.867 07:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.867 07:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.867 07:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.867 07:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.129 00:18:19.129 07:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:19.129 07:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:19.129 07:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.391 07:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.391 07:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.391 07:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.391 07:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.391 07:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.391 07:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:19.391 { 00:18:19.391 "cntlid": 133, 00:18:19.391 "qid": 0, 00:18:19.391 "state": "enabled", 00:18:19.391 "thread": "nvmf_tgt_poll_group_000", 00:18:19.391 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:19.391 "listen_address": { 00:18:19.391 "trtype": "TCP", 00:18:19.391 "adrfam": "IPv4", 00:18:19.391 "traddr": "10.0.0.2", 00:18:19.391 "trsvcid": "4420" 00:18:19.391 }, 00:18:19.391 "peer_address": { 00:18:19.391 "trtype": "TCP", 00:18:19.391 "adrfam": "IPv4", 00:18:19.391 "traddr": "10.0.0.1", 00:18:19.391 "trsvcid": "40830" 00:18:19.391 }, 00:18:19.391 "auth": { 00:18:19.391 "state": "completed", 00:18:19.391 "digest": "sha512", 00:18:19.391 "dhgroup": "ffdhe6144" 00:18:19.391 } 00:18:19.391 } 00:18:19.391 ]' 00:18:19.391 07:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:19.391 07:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:19.391 07:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:19.391 07:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:19.391 07:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:19.653 07:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.653 07:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.653 07:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.653 07:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjcwY2ZhODBjMThkMmYyOWIzNDcyMzdhNTA3MzdiOTk0MGNjNTg1YmM4NWY1OWYwXsw24w==: --dhchap-ctrl-secret 
DHHC-1:01:NGMyMWU1YWEyMWYxYmE4ZWQwMzE0MzNiMWZlOWYzNmGduDOR: 00:18:19.653 07:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YjcwY2ZhODBjMThkMmYyOWIzNDcyMzdhNTA3MzdiOTk0MGNjNTg1YmM4NWY1OWYwXsw24w==: --dhchap-ctrl-secret DHHC-1:01:NGMyMWU1YWEyMWYxYmE4ZWQwMzE0MzNiMWZlOWYzNmGduDOR: 00:18:20.224 07:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.485 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.485 07:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:20.485 07:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.485 07:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.485 07:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.485 07:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:20.485 07:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:20.485 07:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:20.485 07:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:18:20.485 07:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:20.485 07:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:20.485 07:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:20.485 07:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:20.485 07:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.485 07:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:20.485 07:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.486 07:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.486 07:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.486 07:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:20.486 07:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:18:20.486 07:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:21.058 00:18:21.058 07:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:21.058 07:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:21.058 07:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.058 07:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.058 07:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.058 07:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.058 07:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.058 07:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.058 07:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:21.058 { 00:18:21.058 "cntlid": 135, 00:18:21.058 "qid": 0, 00:18:21.058 "state": "enabled", 00:18:21.058 "thread": "nvmf_tgt_poll_group_000", 00:18:21.058 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:21.058 "listen_address": { 00:18:21.058 "trtype": "TCP", 00:18:21.058 "adrfam": "IPv4", 00:18:21.058 "traddr": "10.0.0.2", 00:18:21.058 "trsvcid": "4420" 00:18:21.058 }, 00:18:21.058 "peer_address": { 00:18:21.058 "trtype": "TCP", 00:18:21.058 "adrfam": "IPv4", 00:18:21.058 "traddr": "10.0.0.1", 00:18:21.058 "trsvcid": "40862" 00:18:21.058 }, 00:18:21.058 "auth": { 00:18:21.058 "state": "completed", 00:18:21.058 "digest": "sha512", 00:18:21.058 "dhgroup": "ffdhe6144" 00:18:21.058 } 00:18:21.058 } 00:18:21.058 ]' 00:18:21.058 07:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:21.058 07:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:21.058 07:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:21.320 07:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:21.320 07:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:21.320 07:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.320 07:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.320 07:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.321 07:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:N2QxY2M5MThiNmI4ZDc0N2FhZDJmZjc1NTg2YmE2ZjdmMmE3NjY5NTJlMjc4YWYyYWY4Y2U2OGI4NDU0NWViYyKK2VQ=: 00:18:21.321 07:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:N2QxY2M5MThiNmI4ZDc0N2FhZDJmZjc1NTg2YmE2ZjdmMmE3NjY5NTJlMjc4YWYyYWY4Y2U2OGI4NDU0NWViYyKK2VQ=: 00:18:22.263 07:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.263 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.263 07:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:22.263 07:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.263 07:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.263 07:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.263 07:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:22.263 07:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:22.263 07:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:22.263 07:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:22.263 07:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:18:22.263 07:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:22.263 07:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:22.263 07:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:22.263 07:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:22.263 07:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.263 07:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.263 07:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.263 07:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.263 07:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.263 07:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.263 07:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.263 07:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.837 00:18:22.837 07:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:22.837 07:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.837 07:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:22.837 07:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.837 07:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.837 07:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.837 07:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.837 07:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.837 07:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:22.837 { 00:18:22.837 "cntlid": 137, 00:18:22.837 "qid": 0, 00:18:22.837 "state": "enabled", 00:18:22.837 "thread": "nvmf_tgt_poll_group_000", 00:18:22.837 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:22.837 "listen_address": { 00:18:22.837 "trtype": "TCP", 00:18:22.837 "adrfam": "IPv4", 00:18:22.837 "traddr": "10.0.0.2", 00:18:22.837 "trsvcid": "4420" 00:18:22.837 }, 00:18:22.837 "peer_address": { 00:18:22.837 "trtype": "TCP", 00:18:22.837 "adrfam": "IPv4", 00:18:22.837 "traddr": "10.0.0.1", 00:18:22.837 "trsvcid": "41366" 00:18:22.837 }, 00:18:22.837 "auth": { 00:18:22.837 "state": "completed", 00:18:22.837 "digest": "sha512", 00:18:22.837 "dhgroup": "ffdhe8192" 00:18:22.837 } 00:18:22.837 } 00:18:22.837 ]' 00:18:22.837 07:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:23.098 07:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:23.098 07:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:23.098 07:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:23.098 07:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:23.098 07:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.098 07:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.098 07:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.360 07:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWRlMjBlNGJkNDIwMzdjY2E4ODI2MWVhYTFiMmJlZjgyMTNkMDIyZjMwZWI3ZTAxBcGefQ==: --dhchap-ctrl-secret DHHC-1:03:MTU5OTE3M2RlMThkYTBmMzdkMGFlOTYzZTY3YmU3Mjk3ZGIxZmY2NzMzYTMzNjU5MTUzNDMyNjhhNjJkM2RhMR3Lg04=: 00:18:23.360 07:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:OWRlMjBlNGJkNDIwMzdjY2E4ODI2MWVhYTFiMmJlZjgyMTNkMDIyZjMwZWI3ZTAxBcGefQ==: --dhchap-ctrl-secret DHHC-1:03:MTU5OTE3M2RlMThkYTBmMzdkMGFlOTYzZTY3YmU3Mjk3ZGIxZmY2NzMzYTMzNjU5MTUzNDMyNjhhNjJkM2RhMR3Lg04=: 00:18:23.929 07:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.929 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.929 07:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:23.929 07:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.929 07:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.929 07:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.929 07:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:23.929 07:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:23.929 07:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:24.190 07:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:18:24.190 07:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:24.190 07:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:24.190 07:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:24.190 07:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:24.190 07:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.190 07:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.190 07:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.190 07:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.190 07:01:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.190 07:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.190 07:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.190 07:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.451 00:18:24.713 07:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:24.713 07:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:24.713 07:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.713 07:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.713 07:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.713 07:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.713 07:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.713 07:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.713 07:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:24.713 { 00:18:24.713 "cntlid": 139, 00:18:24.713 "qid": 0, 00:18:24.713 "state": "enabled", 00:18:24.713 "thread": "nvmf_tgt_poll_group_000", 00:18:24.713 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:24.713 "listen_address": { 00:18:24.713 "trtype": "TCP", 00:18:24.713 "adrfam": "IPv4", 00:18:24.713 "traddr": "10.0.0.2", 00:18:24.713 "trsvcid": "4420" 00:18:24.713 }, 00:18:24.713 "peer_address": { 00:18:24.713 "trtype": "TCP", 00:18:24.713 "adrfam": "IPv4", 00:18:24.713 "traddr": "10.0.0.1", 00:18:24.713 "trsvcid": "41396" 00:18:24.713 }, 00:18:24.713 "auth": { 00:18:24.713 "state": "completed", 00:18:24.713 "digest": "sha512", 00:18:24.713 "dhgroup": "ffdhe8192" 00:18:24.713 } 00:18:24.713 } 00:18:24.713 ]' 00:18:24.713 07:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:24.974 07:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:24.974 07:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:24.974 07:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:24.974 07:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:24.974 07:01:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.974 07:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.974 07:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.234 07:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDdhOTI5YWRkZjUzZDI5ZmNkODkxOGZiNjk1ZmY4NTL+NxR6: --dhchap-ctrl-secret DHHC-1:02:Njc3OGU4M2U0ZTk1NTNmM2RjZjZkNGNlYmI1YmQyNTliNmYzMTczN2ViYTAyMjRhNWKlNw==: 00:18:25.235 07:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDdhOTI5YWRkZjUzZDI5ZmNkODkxOGZiNjk1ZmY4NTL+NxR6: --dhchap-ctrl-secret DHHC-1:02:Njc3OGU4M2U0ZTk1NTNmM2RjZjZkNGNlYmI1YmQyNTliNmYzMTczN2ViYTAyMjRhNWKlNw==: 00:18:25.805 07:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.805 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.805 07:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:25.805 07:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.805 07:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.805 07:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.805 07:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:25.805 07:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:25.805 07:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:26.065 07:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:18:26.065 07:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:26.065 07:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:26.066 07:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:26.066 07:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:26.066 07:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.066 07:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.066 07:01:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.066 07:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.066 07:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.066 07:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.066 07:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.066 07:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.326 00:18:26.586 07:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:26.586 07:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.586 07:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:26.586 07:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.586 07:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.586 07:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.586 07:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.586 07:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.586 07:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:26.586 { 00:18:26.586 "cntlid": 141, 00:18:26.586 "qid": 0, 00:18:26.586 "state": "enabled", 00:18:26.586 "thread": "nvmf_tgt_poll_group_000", 00:18:26.586 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:26.586 "listen_address": { 00:18:26.586 "trtype": "TCP", 00:18:26.586 "adrfam": "IPv4", 00:18:26.586 "traddr": "10.0.0.2", 00:18:26.586 "trsvcid": "4420" 00:18:26.586 }, 00:18:26.586 "peer_address": { 00:18:26.587 "trtype": "TCP", 00:18:26.587 "adrfam": "IPv4", 00:18:26.587 "traddr": "10.0.0.1", 00:18:26.587 "trsvcid": "41430" 00:18:26.587 }, 00:18:26.587 "auth": { 00:18:26.587 "state": "completed", 00:18:26.587 "digest": "sha512", 00:18:26.587 "dhgroup": "ffdhe8192" 00:18:26.587 } 00:18:26.587 } 00:18:26.587 ]' 00:18:26.587 07:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:26.847 07:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:26.847 07:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:26.847 07:01:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:26.847 07:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:26.847 07:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.847 07:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.847 07:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.847 07:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjcwY2ZhODBjMThkMmYyOWIzNDcyMzdhNTA3MzdiOTk0MGNjNTg1YmM4NWY1OWYwXsw24w==: --dhchap-ctrl-secret DHHC-1:01:NGMyMWU1YWEyMWYxYmE4ZWQwMzE0MzNiMWZlOWYzNmGduDOR: 00:18:26.847 07:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YjcwY2ZhODBjMThkMmYyOWIzNDcyMzdhNTA3MzdiOTk0MGNjNTg1YmM4NWY1OWYwXsw24w==: --dhchap-ctrl-secret DHHC-1:01:NGMyMWU1YWEyMWYxYmE4ZWQwMzE0MzNiMWZlOWYzNmGduDOR: 00:18:27.791 07:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.791 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.791 07:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:27.791 07:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.791 07:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.791 07:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.791 07:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:27.791 07:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:27.791 07:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:27.791 07:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:18:27.791 07:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:27.791 07:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:27.791 07:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:27.791 07:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:27.791 07:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.791 07:01:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:27.791 07:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.791 07:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.791 07:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.791 07:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:27.791 07:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:27.791 07:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:28.361 00:18:28.361 07:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:28.361 07:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:28.361 07:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.361 07:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.361 07:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.361 07:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.361 07:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.622 07:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.622 07:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:28.622 { 00:18:28.622 "cntlid": 143, 00:18:28.622 "qid": 0, 00:18:28.622 "state": "enabled", 00:18:28.622 "thread": "nvmf_tgt_poll_group_000", 00:18:28.622 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:28.622 "listen_address": { 00:18:28.622 "trtype": "TCP", 00:18:28.622 "adrfam": "IPv4", 00:18:28.622 "traddr": "10.0.0.2", 00:18:28.622 "trsvcid": "4420" 00:18:28.622 }, 00:18:28.622 "peer_address": { 00:18:28.622 "trtype": "TCP", 00:18:28.622 "adrfam": "IPv4", 00:18:28.622 "traddr": "10.0.0.1", 00:18:28.622 "trsvcid": "41462" 00:18:28.622 }, 00:18:28.622 "auth": { 00:18:28.622 "state": "completed", 00:18:28.622 "digest": "sha512", 00:18:28.622 "dhgroup": "ffdhe8192" 00:18:28.622 } 00:18:28.622 } 00:18:28.622 ]' 00:18:28.622 07:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:28.622 07:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:28.622 
07:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:28.622 07:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:28.622 07:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:28.622 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.622 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.622 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.883 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2QxY2M5MThiNmI4ZDc0N2FhZDJmZjc1NTg2YmE2ZjdmMmE3NjY5NTJlMjc4YWYyYWY4Y2U2OGI4NDU0NWViYyKK2VQ=: 00:18:28.883 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:N2QxY2M5MThiNmI4ZDc0N2FhZDJmZjc1NTg2YmE2ZjdmMmE3NjY5NTJlMjc4YWYyYWY4Y2U2OGI4NDU0NWViYyKK2VQ=: 00:18:29.454 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.454 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.454 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:29.455 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.455 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.455 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.455 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:29.455 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:18:29.455 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:29.455 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:29.455 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:29.455 07:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:29.715 07:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:18:29.715 07:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:29.715 07:01:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:29.715 07:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:29.715 07:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:29.715 07:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.715 07:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.715 07:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.715 07:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.715 07:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.715 07:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.716 07:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.716 07:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:30.286 00:18:30.286 07:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:30.286 07:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:30.286 07:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.286 07:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.286 07:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.286 07:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.286 07:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.286 07:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.286 07:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:30.286 { 00:18:30.286 "cntlid": 145, 00:18:30.286 "qid": 0, 00:18:30.286 "state": "enabled", 00:18:30.286 "thread": "nvmf_tgt_poll_group_000", 00:18:30.286 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:30.286 "listen_address": { 00:18:30.286 "trtype": "TCP", 00:18:30.286 "adrfam": "IPv4", 00:18:30.286 "traddr": "10.0.0.2", 00:18:30.286 "trsvcid": "4420" 00:18:30.286 }, 00:18:30.286 "peer_address": { 00:18:30.286 
"trtype": "TCP", 00:18:30.286 "adrfam": "IPv4", 00:18:30.286 "traddr": "10.0.0.1", 00:18:30.286 "trsvcid": "41486" 00:18:30.286 }, 00:18:30.286 "auth": { 00:18:30.286 "state": "completed", 00:18:30.286 "digest": "sha512", 00:18:30.286 "dhgroup": "ffdhe8192" 00:18:30.286 } 00:18:30.286 } 00:18:30.286 ]' 00:18:30.286 07:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:30.286 07:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:30.286 07:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:30.547 07:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:30.547 07:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:30.547 07:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.547 07:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.547 07:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.547 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWRlMjBlNGJkNDIwMzdjY2E4ODI2MWVhYTFiMmJlZjgyMTNkMDIyZjMwZWI3ZTAxBcGefQ==: --dhchap-ctrl-secret DHHC-1:03:MTU5OTE3M2RlMThkYTBmMzdkMGFlOTYzZTY3YmU3Mjk3ZGIxZmY2NzMzYTMzNjU5MTUzNDMyNjhhNjJkM2RhMR3Lg04=: 00:18:30.547 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:OWRlMjBlNGJkNDIwMzdjY2E4ODI2MWVhYTFiMmJlZjgyMTNkMDIyZjMwZWI3ZTAxBcGefQ==: --dhchap-ctrl-secret DHHC-1:03:MTU5OTE3M2RlMThkYTBmMzdkMGFlOTYzZTY3YmU3Mjk3ZGIxZmY2NzMzYTMzNjU5MTUzNDMyNjhhNjJkM2RhMR3Lg04=: 00:18:31.492 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.492 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.492 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:31.492 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.492 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.492 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.492 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:31.492 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.492 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.492 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.492 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:18:31.492 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:31.492 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:18:31.492 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:31.492 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:31.492 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:31.492 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:31.492 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:18:31.492 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:31.492 07:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:31.753 request: 00:18:31.753 { 00:18:31.753 "name": "nvme0", 00:18:31.753 "trtype": "tcp", 00:18:31.753 "traddr": "10.0.0.2", 00:18:31.753 "adrfam": "ipv4", 00:18:31.753 "trsvcid": "4420", 00:18:31.753 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:31.753 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:31.753 "prchk_reftag": false, 00:18:31.753 "prchk_guard": false, 00:18:31.753 "hdgst": false, 00:18:31.753 "ddgst": false, 00:18:31.753 "dhchap_key": "key2", 00:18:31.753 "allow_unrecognized_csi": false, 00:18:31.753 "method": "bdev_nvme_attach_controller", 00:18:31.753 "req_id": 1 00:18:31.753 } 00:18:31.753 Got JSON-RPC error response 00:18:31.753 response: 00:18:31.753 { 00:18:31.753 "code": -5, 00:18:31.753 "message": "Input/output error" 00:18:31.753 } 00:18:31.753 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:31.753 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:31.753 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:31.753 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:31.753 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:31.753 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.753 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.753 07:01:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.753 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.753 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.753 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.753 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.753 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:31.753 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:31.753 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:31.753 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:31.753 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:31.753 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:31.753 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:31.753 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:31.753 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:31.753 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:32.438 request: 00:18:32.438 { 00:18:32.438 "name": "nvme0", 00:18:32.438 "trtype": "tcp", 00:18:32.438 "traddr": "10.0.0.2", 00:18:32.438 "adrfam": "ipv4", 00:18:32.438 "trsvcid": "4420", 00:18:32.438 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:32.438 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:32.438 "prchk_reftag": false, 00:18:32.438 "prchk_guard": false, 00:18:32.438 "hdgst": false, 00:18:32.438 "ddgst": false, 00:18:32.438 "dhchap_key": "key1", 00:18:32.438 "dhchap_ctrlr_key": "ckey2", 00:18:32.438 "allow_unrecognized_csi": false, 00:18:32.438 "method": "bdev_nvme_attach_controller", 00:18:32.438 "req_id": 1 00:18:32.438 } 00:18:32.438 Got JSON-RPC error response 00:18:32.438 response: 00:18:32.438 { 00:18:32.438 "code": -5, 00:18:32.438 "message": "Input/output error" 00:18:32.438 } 00:18:32.438 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:32.438 07:01:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:32.438 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:32.438 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:32.438 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:32.438 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.438 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.438 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.438 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:32.438 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.438 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.438 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.438 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.438 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:32.438 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.438 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:32.438 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:32.438 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:32.438 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:32.438 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.438 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.438 07:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.700 request: 00:18:32.700 { 00:18:32.700 "name": "nvme0", 00:18:32.700 "trtype": "tcp", 00:18:32.700 "traddr": "10.0.0.2", 00:18:32.700 "adrfam": "ipv4", 00:18:32.700 "trsvcid": "4420", 00:18:32.700 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:32.700 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:32.700 "prchk_reftag": false, 00:18:32.700 "prchk_guard": false, 00:18:32.700 "hdgst": false, 00:18:32.700 "ddgst": false, 00:18:32.700 "dhchap_key": "key1", 00:18:32.700 "dhchap_ctrlr_key": "ckey1", 00:18:32.700 "allow_unrecognized_csi": false, 00:18:32.700 "method": "bdev_nvme_attach_controller", 00:18:32.700 "req_id": 1 00:18:32.700 } 00:18:32.700 Got JSON-RPC error response 00:18:32.700 response: 00:18:32.700 { 00:18:32.700 "code": -5, 00:18:32.700 "message": "Input/output error" 00:18:32.700 } 00:18:32.700 07:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:32.700 07:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:32.700 07:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:32.700 07:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:32.700 07:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:32.700 07:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.700 07:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.700 07:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.700 07:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 3099244 00:18:32.700 07:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 3099244 ']' 00:18:32.700 07:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 3099244 00:18:32.700 07:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:32.700 07:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:32.700 07:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3099244 00:18:32.700 07:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:32.700 07:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:32.700 07:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3099244' 00:18:32.700 killing process with pid 3099244 00:18:32.700 07:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 3099244 00:18:32.700 07:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 3099244 00:18:32.960 07:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:32.960 07:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:32.960 07:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:32.961 07:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:32.961 07:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=3125553 00:18:32.961 07:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 3125553 00:18:32.961 07:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:32.961 07:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3125553 ']' 00:18:32.961 07:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:32.961 07:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:32.961 07:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:32.961 07:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:32.961 07:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.902 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:33.902 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:33.902 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:33.902 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:33.902 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.902 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:33.902 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:33.902 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 3125553 00:18:33.902 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3125553 ']' 00:18:33.902 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:33.902 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:33.902 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:33.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
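The restart traced above launches nvmf_tgt with --wait-for-rpc, so the app comes up with its framework paused until an RPC releases it; the suite's waitforlisten helper then blocks until the UNIX socket answers. A minimal standalone sketch of that startup handshake, assuming a local SPDK checkout and the default /var/tmp/spdk.sock socket — the SPDK_DIR path and the polling loop are illustrative, not taken from this run:

  # Sketch: start the target paused, wait for its RPC socket, then
  # release initialization. SPDK_DIR is a hypothetical checkout path.
  SPDK_DIR=/path/to/spdk
  "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
  tgt_pid=$!
  # Poll until the RPC server responds (waitforlisten does this more carefully,
  # with a retry budget and pid checks).
  until "$SPDK_DIR/scripts/rpc.py" spdk_get_version >/dev/null 2>&1; do
      sleep 0.5
  done
  # With --wait-for-rpc, initialization proceeds only after this call.
  "$SPDK_DIR/scripts/rpc.py" framework_start_init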
00:18:33.902 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:33.902 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.902 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:33.902 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:33.902 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:18:33.902 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.902 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.164 null0 00:18:34.164 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.164 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:34.164 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.cw9 00:18:34.164 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.164 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.164 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.164 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.0Wg ]] 00:18:34.164 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0Wg 00:18:34.164 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.164 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.164 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.164 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:34.164 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.UWm 00:18:34.164 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.164 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.164 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.164 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.7eZ ]] 00:18:34.164 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.7eZ 00:18:34.164 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.164 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.164 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.164 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:34.164 07:01:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.9X0 00:18:34.164 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.164 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.164 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.164 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.nwo ]] 00:18:34.164 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.nwo 00:18:34.164 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.164 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.164 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.164 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:34.164 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.1UP 00:18:34.164 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.164 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.164 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.164 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:18:34.164 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:18:34.164 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:34.164 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:34.164 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:34.164 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:34.164 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.164 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:34.164 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.164 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.164 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.164 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:34.164 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
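The loop traced above registers each generated /tmp/spdk.key-* file in the target keyring under the names key0..key3 (with ckey0..ckey2 holding the bidirectional controller secrets), after which connect_authenticate sha512 ffdhe8192 3 authorizes the host on the subsystem with key3 and attaches a host-side controller using the same key. A hedged sketch of that provisioning flow: the two rpc.py calls mirror the trace, while the gen-dhchap-key invocation is an assumption whose flags depend on the installed nvme-cli version:

# Generate a SHA-512 DHHC-1 secret and register it with the target keyring
# under the name "key3" (hmac id 3 selects SHA-512 in nvme-cli).
nvme gen-dhchap-key --hmac=3 --key-length=64 > /tmp/spdk.key-sha512.1UP
scripts/rpc.py keyring_file_add_key key3 /tmp/spdk.key-sha512.1UP
# Allow the host on the subsystem only when it presents key3.
scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
    --dhchap-key key3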
00:18:34.164 07:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:35.105 nvme0n1 00:18:35.105 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:35.105 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:35.105 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.105 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.105 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.105 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.105 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.105 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.105 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:35.105 { 00:18:35.105 "cntlid": 1, 00:18:35.105 "qid": 0, 00:18:35.105 "state": "enabled", 00:18:35.105 "thread": "nvmf_tgt_poll_group_000", 00:18:35.106 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:35.106 "listen_address": { 00:18:35.106 "trtype": "TCP", 00:18:35.106 "adrfam": "IPv4", 00:18:35.106 "traddr": "10.0.0.2", 00:18:35.106 "trsvcid": "4420" 00:18:35.106 }, 00:18:35.106 "peer_address": { 00:18:35.106 "trtype": "TCP", 00:18:35.106 "adrfam": "IPv4", 00:18:35.106 "traddr": "10.0.0.1", 00:18:35.106 "trsvcid": "46044" 00:18:35.106 }, 00:18:35.106 "auth": { 00:18:35.106 "state": "completed", 00:18:35.106 "digest": "sha512", 00:18:35.106 "dhgroup": "ffdhe8192" 00:18:35.106 } 00:18:35.106 } 00:18:35.106 ]' 00:18:35.106 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:35.106 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:35.106 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:35.106 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:35.106 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:35.366 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.366 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.366 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.366 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:N2QxY2M5MThiNmI4ZDc0N2FhZDJmZjc1NTg2YmE2ZjdmMmE3NjY5NTJlMjc4YWYyYWY4Y2U2OGI4NDU0NWViYyKK2VQ=: 00:18:35.366 07:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:N2QxY2M5MThiNmI4ZDc0N2FhZDJmZjc1NTg2YmE2ZjdmMmE3NjY5NTJlMjc4YWYyYWY4Y2U2OGI4NDU0NWViYyKK2VQ=: 00:18:36.308 07:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.308 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.308 07:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:36.308 07:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.308 07:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.308 07:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.308 07:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:36.308 07:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.308 07:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.308 07:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.308 07:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:36.308 07:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:36.308 07:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:36.308 07:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:36.308 07:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:36.308 07:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:36.308 07:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:36.308 07:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:36.308 07:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:36.308 07:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:36.308 07:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:36.308 07:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:36.569 request: 00:18:36.569 { 00:18:36.569 "name": "nvme0", 00:18:36.569 "trtype": "tcp", 00:18:36.569 "traddr": "10.0.0.2", 00:18:36.569 "adrfam": "ipv4", 00:18:36.569 "trsvcid": "4420", 00:18:36.569 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:36.569 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:36.569 "prchk_reftag": false, 00:18:36.569 "prchk_guard": false, 00:18:36.569 "hdgst": false, 00:18:36.569 "ddgst": false, 00:18:36.569 "dhchap_key": "key3", 00:18:36.569 "allow_unrecognized_csi": false, 00:18:36.569 "method": "bdev_nvme_attach_controller", 00:18:36.569 "req_id": 1 00:18:36.569 } 00:18:36.569 Got JSON-RPC error response 00:18:36.569 response: 00:18:36.569 { 00:18:36.569 "code": -5, 00:18:36.569 "message": "Input/output error" 00:18:36.569 } 00:18:36.569 07:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:36.569 07:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:36.569 07:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:36.569 07:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:36.569 07:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:18:36.569 07:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:18:36.569 07:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:36.569 07:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:36.569 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:36.569 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:36.569 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:36.569 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:36.569 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:36.569 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:36.569 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:36.569 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:36.569 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:36.569 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:36.830 request: 00:18:36.830 { 00:18:36.830 "name": "nvme0", 00:18:36.830 "trtype": "tcp", 00:18:36.830 "traddr": "10.0.0.2", 00:18:36.830 "adrfam": "ipv4", 00:18:36.830 "trsvcid": "4420", 00:18:36.830 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:36.830 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:36.830 "prchk_reftag": false, 00:18:36.830 "prchk_guard": false, 00:18:36.830 "hdgst": false, 00:18:36.830 "ddgst": false, 00:18:36.830 "dhchap_key": "key3", 00:18:36.830 "allow_unrecognized_csi": false, 00:18:36.830 "method": "bdev_nvme_attach_controller", 00:18:36.830 "req_id": 1 00:18:36.830 } 00:18:36.830 Got JSON-RPC error response 00:18:36.830 response: 00:18:36.830 { 00:18:36.830 "code": -5, 00:18:36.830 "message": "Input/output error" 00:18:36.830 } 00:18:36.830 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:36.830 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:36.830 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:36.830 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:36.830 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:36.830 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:18:36.830 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:36.830 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:36.830 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:36.830 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:37.092 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:37.092 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.092 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.092 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.092 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:37.092 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.092 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.092 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.092 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:37.092 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:37.092 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:37.092 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:37.092 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:37.092 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:37.092 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:37.092 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:37.092 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:37.092 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:37.352 request: 00:18:37.353 { 00:18:37.353 "name": "nvme0", 00:18:37.353 "trtype": "tcp", 00:18:37.353 "traddr": "10.0.0.2", 00:18:37.353 "adrfam": "ipv4", 00:18:37.353 "trsvcid": "4420", 00:18:37.353 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:37.353 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:37.353 "prchk_reftag": false, 00:18:37.353 "prchk_guard": false, 00:18:37.353 "hdgst": false, 00:18:37.353 "ddgst": false, 00:18:37.353 "dhchap_key": "key0", 00:18:37.353 "dhchap_ctrlr_key": "key1", 00:18:37.353 "allow_unrecognized_csi": false, 00:18:37.353 "method": "bdev_nvme_attach_controller", 00:18:37.353 "req_id": 1 00:18:37.353 } 00:18:37.353 Got JSON-RPC error response 00:18:37.353 response: 00:18:37.353 { 00:18:37.353 "code": -5, 00:18:37.353 "message": "Input/output error" 00:18:37.353 } 00:18:37.353 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:37.353 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:37.353 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:37.353 07:01:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:37.353 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:18:37.353 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:37.353 07:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:37.613 nvme0n1 00:18:37.613 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:18:37.613 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:18:37.613 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.874 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.874 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.874 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.874 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:37.874 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.874 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.874 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.874 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:37.874 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:37.874 07:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:38.817 nvme0n1 00:18:38.817 07:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:18:38.817 07:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:18:38.817 07:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.817 07:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.817 07:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:38.817 07:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.817 07:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.817 07:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.817 07:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:18:38.817 07:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:18:38.817 07:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.077 07:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.077 07:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YjcwY2ZhODBjMThkMmYyOWIzNDcyMzdhNTA3MzdiOTk0MGNjNTg1YmM4NWY1OWYwXsw24w==: --dhchap-ctrl-secret DHHC-1:03:N2QxY2M5MThiNmI4ZDc0N2FhZDJmZjc1NTg2YmE2ZjdmMmE3NjY5NTJlMjc4YWYyYWY4Y2U2OGI4NDU0NWViYyKK2VQ=: 00:18:39.077 07:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YjcwY2ZhODBjMThkMmYyOWIzNDcyMzdhNTA3MzdiOTk0MGNjNTg1YmM4NWY1OWYwXsw24w==: --dhchap-ctrl-secret DHHC-1:03:N2QxY2M5MThiNmI4ZDc0N2FhZDJmZjc1NTg2YmE2ZjdmMmE3NjY5NTJlMjc4YWYyYWY4Y2U2OGI4NDU0NWViYyKK2VQ=: 00:18:39.649 07:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:18:39.649 07:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:18:39.649 07:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:18:39.650 07:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:18:39.650 07:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:18:39.650 07:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:18:39.650 07:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:18:39.650 07:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.650 07:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.911 07:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:18:39.911 07:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:39.911 07:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:18:39.911 07:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:39.911 07:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:39.911 07:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:39.911 07:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:39.911 07:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:39.911 07:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:39.911 07:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:40.483 request: 00:18:40.483 { 00:18:40.483 "name": "nvme0", 00:18:40.483 "trtype": "tcp", 00:18:40.483 "traddr": "10.0.0.2", 00:18:40.483 "adrfam": "ipv4", 00:18:40.483 "trsvcid": "4420", 00:18:40.483 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:40.483 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:40.483 "prchk_reftag": false, 00:18:40.483 "prchk_guard": false, 00:18:40.483 "hdgst": false, 00:18:40.483 "ddgst": false, 00:18:40.483 "dhchap_key": "key1", 00:18:40.483 "allow_unrecognized_csi": false, 00:18:40.483 "method": "bdev_nvme_attach_controller", 00:18:40.483 "req_id": 1 00:18:40.483 } 00:18:40.483 Got JSON-RPC error response 00:18:40.483 response: 00:18:40.483 { 00:18:40.483 "code": -5, 00:18:40.483 "message": "Input/output error" 00:18:40.483 } 00:18:40.483 07:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:40.483 07:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:40.483 07:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:40.483 07:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:40.483 07:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:40.483 07:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:40.483 07:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:41.054 nvme0n1 00:18:41.054 07:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:18:41.054 07:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:18:41.054 07:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.315 07:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.315 07:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.315 07:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.577 07:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:41.577 07:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.577 07:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.577 07:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.577 07:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:18:41.577 07:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:41.577 07:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:41.838 nvme0n1 00:18:41.838 07:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:18:41.838 07:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:18:41.838 07:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.838 07:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.838 07:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.838 07:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.100 07:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:42.100 07:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.100 07:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.100 07:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.100 07:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZDdhOTI5YWRkZjUzZDI5ZmNkODkxOGZiNjk1ZmY4NTL+NxR6: '' 2s 00:18:42.100 07:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:42.100 07:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:42.100 07:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZDdhOTI5YWRkZjUzZDI5ZmNkODkxOGZiNjk1ZmY4NTL+NxR6: 00:18:42.100 07:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:18:42.100 07:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:42.100 07:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:42.100 07:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZDdhOTI5YWRkZjUzZDI5ZmNkODkxOGZiNjk1ZmY4NTL+NxR6: ]] 00:18:42.100 07:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZDdhOTI5YWRkZjUzZDI5ZmNkODkxOGZiNjk1ZmY4NTL+NxR6: 00:18:42.100 07:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:18:42.100 07:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:42.100 07:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:44.646 07:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:18:44.646 07:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:18:44.646 07:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:18:44.646 07:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:18:44.646 07:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:18:44.646 07:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:18:44.646 07:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:18:44.646 07:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key2 00:18:44.646 07:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.646 07:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.646 07:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.646 07:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:YjcwY2ZhODBjMThkMmYyOWIzNDcyMzdhNTA3MzdiOTk0MGNjNTg1YmM4NWY1OWYwXsw24w==: 2s 00:18:44.646 07:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:44.646 07:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:44.646 07:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:18:44.646 07:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YjcwY2ZhODBjMThkMmYyOWIzNDcyMzdhNTA3MzdiOTk0MGNjNTg1YmM4NWY1OWYwXsw24w==: 00:18:44.646 07:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:44.646 07:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:44.646 07:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:18:44.646 07:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YjcwY2ZhODBjMThkMmYyOWIzNDcyMzdhNTA3MzdiOTk0MGNjNTg1YmM4NWY1OWYwXsw24w==: ]] 00:18:44.646 07:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YjcwY2ZhODBjMThkMmYyOWIzNDcyMzdhNTA3MzdiOTk0MGNjNTg1YmM4NWY1OWYwXsw24w==: 00:18:44.646 07:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:44.646 07:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:46.560 07:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:18:46.560 07:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:18:46.560 07:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:18:46.560 07:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:18:46.560 07:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:18:46.560 07:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:18:46.560 07:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:18:46.560 07:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.560 07:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:46.561 07:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.561 07:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.561 07:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.561 07:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:46.561 07:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:46.561 07:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:47.132 nvme0n1 00:18:47.132 07:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:47.132 07:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.132 07:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.132 07:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.132 07:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:47.132 07:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:47.392 07:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:18:47.392 07:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:18:47.392 07:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.652 07:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.652 07:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:47.652 07:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.652 07:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.652 07:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.652 07:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:18:47.652 07:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:18:47.920 07:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:18:47.920 07:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:18:47.920 07:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.920 07:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.921 07:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:47.921 07:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.921 07:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.183 07:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.183 07:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:48.183 07:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:48.183 07:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:48.183 07:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:48.183 07:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:48.183 07:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:48.183 07:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:48.183 07:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:48.183 07:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:48.444 request: 00:18:48.444 { 00:18:48.444 "name": "nvme0", 00:18:48.444 "dhchap_key": "key1", 00:18:48.444 "dhchap_ctrlr_key": "key3", 00:18:48.444 "method": "bdev_nvme_set_keys", 00:18:48.444 "req_id": 1 00:18:48.444 } 00:18:48.444 Got JSON-RPC error response 00:18:48.444 response: 00:18:48.444 { 00:18:48.444 "code": -13, 00:18:48.444 "message": "Permission denied" 00:18:48.444 } 00:18:48.444 07:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:48.444 07:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:48.444 07:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:48.444 07:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:48.444 07:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:48.444 07:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.444 07:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:48.706 07:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:18:48.706 07:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:18:49.649 07:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:49.649 07:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:49.649 07:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.910 07:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:18:49.910 07:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:49.910 07:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.910 07:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.910 07:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.910 07:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:49.910 07:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:49.910 07:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:50.482 nvme0n1 00:18:50.743 07:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:50.744 07:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.744 07:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.744 07:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.744 07:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:50.744 07:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:50.744 07:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:50.744 07:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 
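The failing bdev_nvme_set_keys calls here are deliberate negative tests: after the subsystem is re-keyed, a host that asks to re-authenticate with a key pair the subsystem no longer accepts (key1/key3 above, key2/key0 below) is rejected and the RPC returns -13 Permission denied, after which the jq-length poll with sleep 1s waits for the dropped controller count to reach 0. The NOT wrapper from common/autotest_common.sh inverts the exit status so the test passes only when the call fails; a simplified sketch of that idiom (the real helper also validates its argument and tracks exit codes, as the es= bookkeeping in the trace shows):

# Run a command that is expected to fail and invert its status.
NOT() {
    if "$@"; then
        return 1  # unexpectedly succeeded
    fi
    return 0      # failed, which is what the test wants
}
# Expected to be rejected with -13 Permission denied:
NOT scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
    --dhchap-key key1 --dhchap-ctrlr-key key3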
00:18:50.744 07:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:50.744 07:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:50.744 07:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:50.744 07:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:50.744 07:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:51.004 request: 00:18:51.004 { 00:18:51.004 "name": "nvme0", 00:18:51.004 "dhchap_key": "key2", 00:18:51.004 "dhchap_ctrlr_key": "key0", 00:18:51.004 "method": "bdev_nvme_set_keys", 00:18:51.004 "req_id": 1 00:18:51.004 } 00:18:51.004 Got JSON-RPC error response 00:18:51.004 response: 00:18:51.004 { 00:18:51.004 "code": -13, 00:18:51.004 "message": "Permission denied" 00:18:51.004 } 00:18:51.004 07:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:51.004 07:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:51.004 07:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:51.004 07:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:51.004 07:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:51.004 07:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:51.004 07:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.265 07:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:18:51.265 07:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:18:52.208 07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:52.208 07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:52.208 07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.473 07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:18:52.474 07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:18:52.474 07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:18:52.474 07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3099357 00:18:52.474 07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 3099357 ']' 00:18:52.474 07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 3099357 00:18:52.474 07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:52.474 
07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:52.474 07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3099357 00:18:52.474 07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:52.474 07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:52.474 07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3099357' 00:18:52.474 killing process with pid 3099357 00:18:52.474 07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 3099357 00:18:52.474 07:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 3099357 00:18:52.781 07:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:52.781 07:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:52.781 07:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:18:52.781 07:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:52.781 07:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:18:52.781 07:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:52.781 07:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:52.781 rmmod nvme_tcp 00:18:52.781 rmmod nvme_fabrics 00:18:52.781 rmmod nvme_keyring 00:18:52.781 07:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:52.781 07:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:18:52.781 07:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:18:52.781 07:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@515 -- # '[' -n 3125553 ']' 00:18:52.781 07:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # killprocess 3125553 00:18:52.781 07:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 3125553 ']' 00:18:52.781 07:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 3125553 00:18:52.781 07:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:52.781 07:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:52.781 07:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3125553 00:18:52.781 07:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:52.781 07:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:52.781 07:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3125553' 00:18:52.781 killing process with pid 3125553 00:18:52.781 07:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 3125553 00:18:52.781 07:01:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 3125553 00:18:53.093 07:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:53.093 07:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:18:53.093 07:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:18:53.093 07:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:18:53.093 07:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-save 00:18:53.093 07:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:18:53.093 07:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-restore 00:18:53.093 07:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:53.093 07:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:53.093 07:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:53.093 07:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:53.093 07:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:55.007 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:55.007 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.cw9 /tmp/spdk.key-sha256.UWm /tmp/spdk.key-sha384.9X0 /tmp/spdk.key-sha512.1UP /tmp/spdk.key-sha512.0Wg /tmp/spdk.key-sha384.7eZ /tmp/spdk.key-sha256.nwo '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:55.007 00:18:55.007 real 2m37.175s 00:18:55.007 user 5m53.646s 00:18:55.007 sys 0m24.929s 00:18:55.007 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:55.007 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.007 ************************************ 00:18:55.007 END TEST nvmf_auth_target 00:18:55.007 ************************************ 00:18:55.007 07:01:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:55.007 07:01:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:55.007 07:01:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:55.007 07:01:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:55.007 07:01:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:55.007 ************************************ 00:18:55.007 START TEST nvmf_bdevio_no_huge 00:18:55.007 ************************************ 00:18:55.007 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:55.268 * Looking for test storage... 
00:18:55.268 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:55.268 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:55.268 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:18:55.268 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:55.268 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:55.268 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:55.268 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:55.268 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:55.268 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:18:55.268 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:18:55.268 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:18:55.268 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:18:55.268 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:18:55.268 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:18:55.268 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:18:55.268 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:55.268 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:18:55.268 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:18:55.268 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:55.268 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:55.268 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:18:55.268 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:18:55.268 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:55.268 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:18:55.268 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:18:55.268 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:18:55.268 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:18:55.268 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:55.268 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:18:55.268 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:18:55.268 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:55.268 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:55.268 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:18:55.268 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:55.268 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:55.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.268 --rc genhtml_branch_coverage=1 00:18:55.268 --rc genhtml_function_coverage=1 00:18:55.268 --rc genhtml_legend=1 00:18:55.268 --rc geninfo_all_blocks=1 00:18:55.268 --rc geninfo_unexecuted_blocks=1 00:18:55.268 00:18:55.268 ' 00:18:55.268 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:55.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.269 --rc genhtml_branch_coverage=1 00:18:55.269 --rc genhtml_function_coverage=1 00:18:55.269 --rc genhtml_legend=1 00:18:55.269 --rc geninfo_all_blocks=1 00:18:55.269 --rc geninfo_unexecuted_blocks=1 00:18:55.269 00:18:55.269 ' 00:18:55.269 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:55.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.269 --rc genhtml_branch_coverage=1 00:18:55.269 --rc genhtml_function_coverage=1 00:18:55.269 --rc genhtml_legend=1 00:18:55.269 --rc geninfo_all_blocks=1 00:18:55.269 --rc geninfo_unexecuted_blocks=1 00:18:55.269 00:18:55.269 ' 00:18:55.269 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:55.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.269 --rc genhtml_branch_coverage=1 00:18:55.269 --rc genhtml_function_coverage=1 00:18:55.269 --rc genhtml_legend=1 00:18:55.269 --rc geninfo_all_blocks=1 00:18:55.269 --rc geninfo_unexecuted_blocks=1 00:18:55.269 00:18:55.269 ' 00:18:55.269 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:55.269 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:55.269 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:55.269 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:55.269 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:55.269 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:55.269 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:55.269 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:55.269 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:55.269 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:55.269 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:55.269 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:55.269 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:55.269 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:55.269 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:55.269 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:55.269 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:55.269 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:55.269 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:55.269 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:18:55.269 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:55.269 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:55.269 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:55.269 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.269 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.269 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.269 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:55.269 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.269 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:18:55.269 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:55.269 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:55.269 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:55.269 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:55.269 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:55.269 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:55.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:55.269 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:55.269 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:55.269 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:55.269 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:55.269 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:55.269 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:55.269 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:18:55.269 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:55.269 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:55.269 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:55.269 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:55.269 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:55.269 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:55.269 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:55.269 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:18:55.269 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:18:55.269 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:18:55.269 07:01:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:03.411 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:03.411 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:19:03.411 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:03.411 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:03.411 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:03.411 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:03.411 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:03.411 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:19:03.412 
07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:03.412 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:03.412 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:03.412 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:03.412 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # is_hw=yes 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:03.412 07:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:03.412 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:03.412 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:03.412 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:03.412 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:03.412 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:03.412 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.615 ms 00:19:03.412 00:19:03.412 --- 10.0.0.2 ping statistics --- 00:19:03.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:03.412 rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms 00:19:03.412 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:03.412 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:03.412 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:19:03.412 00:19:03.412 --- 10.0.0.1 ping statistics --- 00:19:03.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:03.412 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:19:03.412 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:03.412 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # return 0 00:19:03.412 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:19:03.412 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:03.412 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:19:03.412 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:19:03.412 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:03.412 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:19:03.412 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:19:03.412 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:03.412 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:03.412 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:03.413 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:03.413 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # nvmfpid=3133717 00:19:03.413 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # waitforlisten 3133717 00:19:03.413 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:03.413 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 3133717 ']' 00:19:03.413 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:03.413 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@836 -- # local max_retries=100 00:19:03.413 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:03.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:03.413 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:03.413 07:02:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:03.413 [2024-10-16 07:02:02.227442] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:19:03.413 [2024-10-16 07:02:02.227512] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:03.413 [2024-10-16 07:02:02.326005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:03.413 [2024-10-16 07:02:02.387191] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:03.413 [2024-10-16 07:02:02.387238] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:03.413 [2024-10-16 07:02:02.387247] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:03.413 [2024-10-16 07:02:02.387254] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:03.413 [2024-10-16 07:02:02.387261] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
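
The namespace plumbing and target launch traced above reduce to a short sequence: move one port of the NIC pair into a private namespace, address both ends, open the NVMe/TCP port in the firewall, then start nvmf_tgt without hugepages and wait for its RPC socket. A condensed sketch under those assumptions; every command is lifted from the trace except the polling loop, which is only a stand-in for the harness's waitforlisten:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk      # target side of the port pair
    ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" \
        -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
    # assumed stand-in for waitforlisten: poll until the app answers on its RPC socket
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.5
    done

The tagged iptables comment is what lets the later nvmftestfini teardown restore every rule except the SPDK_NVMF ones, as seen in the iptables-save | grep -v SPDK_NVMF step earlier in the log.
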
00:19:03.413 [2024-10-16 07:02:02.388807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:19:03.413 [2024-10-16 07:02:02.388939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:19:03.413 [2024-10-16 07:02:02.389256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:19:03.413 [2024-10-16 07:02:02.389260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:03.674 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:03.674 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:19:03.674 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:03.674 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:03.674 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:03.674 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:03.674 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:03.674 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.674 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:03.674 [2024-10-16 07:02:03.106929] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:03.674 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.674 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:03.674 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.674 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:03.674 Malloc0 00:19:03.674 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.674 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:03.674 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.674 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:03.674 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.674 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:03.674 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.674 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:03.674 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.674 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:19:03.674 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.674 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:03.674 [2024-10-16 07:02:03.161049] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:03.674 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.674 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:03.674 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:03.674 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # config=() 00:19:03.674 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # local subsystem config 00:19:03.674 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:19:03.674 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:19:03.674 { 00:19:03.674 "params": { 00:19:03.674 "name": "Nvme$subsystem", 00:19:03.674 "trtype": "$TEST_TRANSPORT", 00:19:03.674 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:03.674 "adrfam": "ipv4", 00:19:03.674 "trsvcid": "$NVMF_PORT", 00:19:03.674 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:03.674 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:03.674 "hdgst": ${hdgst:-false}, 00:19:03.674 "ddgst": ${ddgst:-false} 00:19:03.674 }, 00:19:03.674 "method": "bdev_nvme_attach_controller" 00:19:03.674 } 00:19:03.674 EOF 00:19:03.674 )") 00:19:03.674 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # cat 00:19:03.935 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # jq . 00:19:03.935 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@583 -- # IFS=, 00:19:03.935 07:02:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:19:03.935 "params": { 00:19:03.935 "name": "Nvme1", 00:19:03.935 "trtype": "tcp", 00:19:03.935 "traddr": "10.0.0.2", 00:19:03.935 "adrfam": "ipv4", 00:19:03.935 "trsvcid": "4420", 00:19:03.935 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:03.935 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:03.935 "hdgst": false, 00:19:03.935 "ddgst": false 00:19:03.935 }, 00:19:03.935 "method": "bdev_nvme_attach_controller" 00:19:03.935 }' 00:19:03.935 [2024-10-16 07:02:03.218633] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
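
Strung together, the rpc_cmd calls above are the entire target-side setup for this suite: one TCP transport, one 64 MiB malloc bdev with 512-byte blocks, a subsystem exposing it, and a listener on the namespaced address. As standalone rpc.py invocations (a sketch; the harness's rpc_cmd actually multiplexes these over a single persistent connection), with flags exactly as traced:

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0      # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
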
00:19:03.935 [2024-10-16 07:02:03.218703] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3134067 ] 00:19:03.935 [2024-10-16 07:02:03.301967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:03.935 [2024-10-16 07:02:03.361976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:03.935 [2024-10-16 07:02:03.362140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:03.935 [2024-10-16 07:02:03.362142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.508 I/O targets: 00:19:04.508 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:04.508 00:19:04.508 00:19:04.508 CUnit - A unit testing framework for C - Version 2.1-3 00:19:04.508 http://cunit.sourceforge.net/ 00:19:04.508 00:19:04.508 00:19:04.508 Suite: bdevio tests on: Nvme1n1 00:19:04.508 Test: blockdev write read block ...passed 00:19:04.508 Test: blockdev write zeroes read block ...passed 00:19:04.508 Test: blockdev write zeroes read no split ...passed 00:19:04.508 Test: blockdev write zeroes read split ...passed 00:19:04.508 Test: blockdev write zeroes read split partial ...passed 00:19:04.508 Test: blockdev reset ...[2024-10-16 07:02:03.843497] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:04.508 [2024-10-16 07:02:03.843597] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24638c0 (9): Bad file descriptor 00:19:04.508 [2024-10-16 07:02:03.940911] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
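
The bdevio binary running these cases never talks to rpc.py itself; it was handed the attach-controller document generated by gen_nvmf_target_json above on an inherited descriptor (--json /dev/fd/62). A sketch of that plumbing using an explicit here-doc on fd 62; the harness presumably arranges the descriptor via process substitution, and the "subsystems" envelope around the traced entry is an assumption about gen_nvmf_target_json's output, since the trace only prints the inner object:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio \
        --json /dev/fd/62 --no-huge -s 1024 62<<'JSON'
    { "subsystems": [ { "subsystem": "bdev", "config": [ {
        "method": "bdev_nvme_attach_controller",
        "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                    "adrfam": "ipv4", "trsvcid": "4420",
                    "subnqn": "nqn.2016-06.io.spdk:cnode1",
                    "hostnqn": "nqn.2016-06.io.spdk:host1",
                    "hdgst": false, "ddgst": false } } ] } ] }
    JSON

That attach is why the suite below reports "bdevio tests on: Nvme1n1": the controller named Nvme1 surfaces its first namespace as bdev Nvme1n1.
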
00:19:04.508 passed 00:19:04.508 Test: blockdev write read 8 blocks ...passed 00:19:04.508 Test: blockdev write read size > 128k ...passed 00:19:04.508 Test: blockdev write read invalid size ...passed 00:19:04.770 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:04.770 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:04.770 Test: blockdev write read max offset ...passed 00:19:04.770 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:04.770 Test: blockdev writev readv 8 blocks ...passed 00:19:04.770 Test: blockdev writev readv 30 x 1block ...passed 00:19:04.770 Test: blockdev writev readv block ...passed 00:19:04.770 Test: blockdev writev readv size > 128k ...passed 00:19:04.770 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:04.770 Test: blockdev comparev and writev ...[2024-10-16 07:02:04.206827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:04.770 [2024-10-16 07:02:04.206879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:04.770 [2024-10-16 07:02:04.206897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:04.770 [2024-10-16 07:02:04.206906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:04.770 [2024-10-16 07:02:04.207454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:04.770 [2024-10-16 07:02:04.207470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:04.770 [2024-10-16 07:02:04.207485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:04.770 [2024-10-16 07:02:04.207494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:04.770 [2024-10-16 07:02:04.208075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:04.770 [2024-10-16 07:02:04.208086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:04.770 [2024-10-16 07:02:04.208101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:04.770 [2024-10-16 07:02:04.208109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:04.770 [2024-10-16 07:02:04.208636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:04.770 [2024-10-16 07:02:04.208654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:04.770 [2024-10-16 07:02:04.208668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:04.770 [2024-10-16 07:02:04.208676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:04.770 passed 00:19:05.032 Test: blockdev nvme passthru rw ...passed 00:19:05.032 Test: blockdev nvme passthru vendor specific ...[2024-10-16 07:02:04.293507] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:05.032 [2024-10-16 07:02:04.293527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:05.032 [2024-10-16 07:02:04.293898] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:05.032 [2024-10-16 07:02:04.293911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:05.032 [2024-10-16 07:02:04.294294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:05.032 [2024-10-16 07:02:04.294304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:05.032 [2024-10-16 07:02:04.294659] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:05.032 [2024-10-16 07:02:04.294671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:05.032 passed 00:19:05.032 Test: blockdev nvme admin passthru ...passed 00:19:05.032 Test: blockdev copy ...passed 00:19:05.032 00:19:05.032 Run Summary: Type Total Ran Passed Failed Inactive 00:19:05.032 suites 1 1 n/a 0 0 00:19:05.032 tests 23 23 23 0 0 00:19:05.032 asserts 152 152 152 0 n/a 00:19:05.032 00:19:05.032 Elapsed time = 1.315 seconds 00:19:05.292 07:02:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:05.292 07:02:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.292 07:02:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:05.292 07:02:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.292 07:02:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:05.292 07:02:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:19:05.292 07:02:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # nvmfcleanup 00:19:05.292 07:02:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:19:05.292 07:02:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:05.292 07:02:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:19:05.292 07:02:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:05.292 07:02:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:05.293 rmmod nvme_tcp 00:19:05.293 rmmod nvme_fabrics 00:19:05.293 rmmod nvme_keyring 00:19:05.293 07:02:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:05.293 07:02:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:19:05.293 07:02:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:19:05.293 07:02:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@515 -- # '[' -n 3133717 ']' 00:19:05.293 07:02:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # killprocess 3133717 00:19:05.293 07:02:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 3133717 ']' 00:19:05.293 07:02:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 3133717 00:19:05.293 07:02:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:19:05.293 07:02:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:05.293 07:02:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3133717 00:19:05.553 07:02:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:19:05.553 07:02:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:19:05.553 07:02:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3133717' 00:19:05.553 killing process with pid 3133717 00:19:05.553 07:02:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 3133717 00:19:05.553 07:02:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 3133717 00:19:05.553 07:02:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:19:05.553 07:02:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:19:05.553 07:02:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:19:05.553 07:02:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:19:05.553 07:02:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-save 00:19:05.553 07:02:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:19:05.553 07:02:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-restore 00:19:05.553 07:02:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:05.553 07:02:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:05.553 07:02:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:05.553 07:02:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:05.553 07:02:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:08.105 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:08.105 00:19:08.105 real 0m12.621s 00:19:08.105 user 0m15.368s 00:19:08.105 sys 0m6.635s 00:19:08.105 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:08.105 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:19:08.105 ************************************ 00:19:08.105 END TEST nvmf_bdevio_no_huge 00:19:08.105 ************************************ 00:19:08.105 07:02:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:08.105 07:02:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:08.105 07:02:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:08.105 07:02:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:08.105 ************************************ 00:19:08.105 START TEST nvmf_tls 00:19:08.105 ************************************ 00:19:08.105 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:08.105 * Looking for test storage... 00:19:08.105 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:08.105 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:08.105 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:08.105 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:19:08.105 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:08.105 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:08.105 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:08.105 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:08.105 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:19:08.105 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:19:08.105 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:19:08.105 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:19:08.105 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:19:08.105 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:19:08.105 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:19:08.105 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:08.105 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:19:08.105 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:19:08.105 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:08.105 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:08.105 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:19:08.105 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:19:08.105 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:08.105 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:19:08.105 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:19:08.105 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:08.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:08.106 --rc genhtml_branch_coverage=1 00:19:08.106 --rc genhtml_function_coverage=1 00:19:08.106 --rc genhtml_legend=1 00:19:08.106 --rc geninfo_all_blocks=1 00:19:08.106 --rc geninfo_unexecuted_blocks=1 00:19:08.106 00:19:08.106 ' 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:08.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:08.106 --rc genhtml_branch_coverage=1 00:19:08.106 --rc genhtml_function_coverage=1 00:19:08.106 --rc genhtml_legend=1 00:19:08.106 --rc geninfo_all_blocks=1 00:19:08.106 --rc geninfo_unexecuted_blocks=1 00:19:08.106 00:19:08.106 ' 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:08.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:08.106 --rc genhtml_branch_coverage=1 00:19:08.106 --rc genhtml_function_coverage=1 00:19:08.106 --rc genhtml_legend=1 00:19:08.106 --rc geninfo_all_blocks=1 00:19:08.106 --rc geninfo_unexecuted_blocks=1 00:19:08.106 00:19:08.106 ' 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:08.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:08.106 --rc genhtml_branch_coverage=1 00:19:08.106 --rc genhtml_function_coverage=1 00:19:08.106 --rc genhtml_legend=1 00:19:08.106 --rc geninfo_all_blocks=1 00:19:08.106 --rc geninfo_unexecuted_blocks=1 00:19:08.106 00:19:08.106 ' 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
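The xtrace above steps through scripts/common.sh deciding that lcov 1.15 sorts below 2 before choosing the coverage flags. A minimal re-derivation of that component-wise version comparison, assuming a plain split-on-dots approach (the traced helper also splits on '-' and ':'; this sketch is not its verbatim source, and cmp_versions_sketch is a hypothetical name):

cmp_versions_sketch() {          # usage: cmp_versions_sketch 1.15 '<' 2
    local IFS=. op=$2 v
    local -a ver1=($1) ver2=($3) # split each version on dots
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        # a missing component evaluates as 0, so 1.15 vs 2 compares 1 < 2 first
        ((ver1[v] > ver2[v])) && { [[ $op == '>' ]]; return; }
        ((ver1[v] < ver2[v])) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '==' ]]            # every component matched
}
cmp_versions_sketch 1.15 '<' 2 && echo "old lcov: use the --rc branch/function coverage flags"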
00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:08.106 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # prepare_net_devs 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # local -g is_hw=no 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # remove_spdk_ns 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:19:08.106 07:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:16.247 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:16.247 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:19:16.247 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:16.247 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:16.247 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:16.247 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:16.247 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:16.247 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:19:16.247 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:16.247 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:19:16.247 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:19:16.247 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:19:16.247 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:19:16.247 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:19:16.247 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:19:16.247 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:16.247 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:16.247 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:16.247 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:16.247 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:16.247 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:16.248 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:16.248 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:16.248 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:16.248 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # is_hw=yes 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:16.248 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:16.248 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.519 ms 00:19:16.248 00:19:16.248 --- 10.0.0.2 ping statistics --- 00:19:16.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:16.248 rtt min/avg/max/mdev = 0.519/0.519/0.519/0.000 ms 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:16.248 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:16.248 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.309 ms 00:19:16.248 00:19:16.248 --- 10.0.0.1 ping statistics --- 00:19:16.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:16.248 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # return 0 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:16.248 07:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:16.248 07:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=3138479 00:19:16.248 07:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 3138479 00:19:16.248 07:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:16.248 07:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3138479 ']' 00:19:16.248 07:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:16.248 07:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:16.248 07:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:16.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:16.248 07:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:16.248 07:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:16.248 [2024-10-16 07:02:15.061289] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
00:19:16.248 [2024-10-16 07:02:15.061360] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:16.248 [2024-10-16 07:02:15.135394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.248 [2024-10-16 07:02:15.187700] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:16.248 [2024-10-16 07:02:15.187754] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:16.248 [2024-10-16 07:02:15.187763] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:16.248 [2024-10-16 07:02:15.187770] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:16.248 [2024-10-16 07:02:15.187776] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:16.249 [2024-10-16 07:02:15.188586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:16.509 07:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:16.509 07:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:16.509 07:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:16.509 07:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:16.509 07:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:16.509 07:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:16.509 07:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:19:16.509 07:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:16.770 true 00:19:16.770 07:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:16.770 07:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:19:17.031 07:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:19:17.031 07:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:19:17.031 07:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:17.031 07:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:17.031 07:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:19:17.291 07:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:19:17.292 07:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:19:17.292 07:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:17.552 07:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:17.552 07:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:19:17.814 07:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:19:17.814 07:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:19:17.814 07:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:17.814 07:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:19:17.814 07:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:19:17.814 07:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:19:17.814 07:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:18.075 07:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:18.075 07:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:19:18.335 07:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:19:18.335 07:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:19:18.335 07:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:18.335 07:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:18.335 07:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:19:18.596 07:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:19:18.596 07:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:19:18.596 07:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:18.596 07:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:18.596 07:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:19:18.596 07:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:19:18.596 07:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:19:18.596 07:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:19:18.596 07:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:19:18.596 07:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:18.596 07:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:18.596 07:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:18.596 07:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@728 -- # local prefix key digest 00:19:18.596 07:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:19:18.596 07:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=ffeeddccbbaa99887766554433221100 00:19:18.596 07:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:19:18.596 07:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:19:18.596 07:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:18.596 07:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:18.596 07:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.44hjdLhSn8 00:19:18.596 07:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:19:18.596 07:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.u4xrxuUXMO 00:19:18.596 07:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:18.596 07:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:18.596 07:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.44hjdLhSn8 00:19:18.596 07:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.u4xrxuUXMO 00:19:18.596 07:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:18.856 07:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:19.117 07:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.44hjdLhSn8 00:19:19.117 07:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.44hjdLhSn8 00:19:19.117 07:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:19.377 [2024-10-16 07:02:18.634391] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:19.377 07:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:19.377 07:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:19.638 [2024-10-16 07:02:18.975224] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:19.638 [2024-10-16 07:02:18.975431] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:19.638 07:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:19.898 malloc0 00:19:19.898 07:02:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:19.898 07:02:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.44hjdLhSn8 00:19:20.159 07:02:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:20.159 07:02:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.44hjdLhSn8 00:19:32.386 Initializing NVMe Controllers 00:19:32.386 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:32.386 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:32.386 Initialization complete. Launching workers. 00:19:32.386 ======================================================== 00:19:32.386 Latency(us) 00:19:32.386 Device Information : IOPS MiB/s Average min max 00:19:32.386 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18557.94 72.49 3448.88 1009.11 5195.71 00:19:32.386 ======================================================== 00:19:32.386 Total : 18557.94 72.49 3448.88 1009.11 5195.71 00:19:32.386 00:19:32.386 07:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.44hjdLhSn8 00:19:32.386 07:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:32.386 07:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:32.386 07:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:32.386 07:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.44hjdLhSn8 00:19:32.386 07:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:32.386 07:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3141474 00:19:32.386 07:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:32.386 07:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3141474 /var/tmp/bdevperf.sock 00:19:32.386 07:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:32.386 07:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3141474 ']' 00:19:32.387 07:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:32.387 07:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:32.387 07:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:32.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:32.387 07:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:32.387 07:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.387 [2024-10-16 07:02:29.814799] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:19:32.387 [2024-10-16 07:02:29.814868] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3141474 ] 00:19:32.387 [2024-10-16 07:02:29.892609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.387 [2024-10-16 07:02:29.927930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:32.387 07:02:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:32.387 07:02:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:32.387 07:02:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.44hjdLhSn8 00:19:32.387 07:02:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:32.387 [2024-10-16 07:02:30.943105] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:32.387 TLSTESTn1 00:19:32.387 07:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:32.387 Running I/O for 10 seconds... 
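Before the run settles into its 10-second I/O loop below, the trace above has already shown the whole TLS attach recipe on the bdevperf side: register the PSK file on bdevperf's RPC socket, attach the controller with --psk naming that key, then trigger the workload. Condensed replay of exactly those traced commands (the socket path, key name, and /tmp key file are this run's values, not constants):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
$rpc -s $sock keyring_file_add_key key0 /tmp/tmp.44hjdLhSn8      # PSK file into the keyring as key0
$rpc -s $sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -t 20 -s $sock perform_tests   # start the workload the app was launched with (-q 128 -o 4096 -w verify -t 10)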
00:19:34.030 5664.00 IOPS, 22.12 MiB/s [2024-10-16T05:02:34.471Z] 5495.50 IOPS, 21.47 MiB/s [2024-10-16T05:02:35.413Z] 5197.33 IOPS, 20.30 MiB/s [2024-10-16T05:02:36.355Z] 5289.75 IOPS, 20.66 MiB/s [2024-10-16T05:02:37.298Z] 5415.80 IOPS, 21.16 MiB/s [2024-10-16T05:02:38.240Z] 5516.67 IOPS, 21.55 MiB/s [2024-10-16T05:02:39.182Z] 5500.57 IOPS, 21.49 MiB/s [2024-10-16T05:02:40.568Z] 5533.88 IOPS, 21.62 MiB/s [2024-10-16T05:02:41.511Z] 5643.11 IOPS, 22.04 MiB/s [2024-10-16T05:02:41.511Z] 5678.40 IOPS, 22.18 MiB/s 00:19:42.012 Latency(us) 00:19:42.012 [2024-10-16T05:02:41.511Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:42.012 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:42.012 Verification LBA range: start 0x0 length 0x2000 00:19:42.012 TLSTESTn1 : 10.01 5684.04 22.20 0.00 0.00 22485.52 4942.51 28617.39 00:19:42.012 [2024-10-16T05:02:41.511Z] =================================================================================================================== 00:19:42.012 [2024-10-16T05:02:41.511Z] Total : 5684.04 22.20 0.00 0.00 22485.52 4942.51 28617.39 00:19:42.012 { 00:19:42.012 "results": [ 00:19:42.012 { 00:19:42.012 "job": "TLSTESTn1", 00:19:42.012 "core_mask": "0x4", 00:19:42.012 "workload": "verify", 00:19:42.012 "status": "finished", 00:19:42.012 "verify_range": { 00:19:42.012 "start": 0, 00:19:42.012 "length": 8192 00:19:42.012 }, 00:19:42.012 "queue_depth": 128, 00:19:42.012 "io_size": 4096, 00:19:42.012 "runtime": 10.012593, 00:19:42.012 "iops": 5684.04208580135, 00:19:42.012 "mibps": 22.203289397661525, 00:19:42.012 "io_failed": 0, 00:19:42.012 "io_timeout": 0, 00:19:42.012 "avg_latency_us": 22485.524055852307, 00:19:42.012 "min_latency_us": 4942.506666666667, 00:19:42.012 "max_latency_us": 28617.386666666665 00:19:42.012 } 00:19:42.012 ], 00:19:42.012 "core_count": 1 00:19:42.012 } 00:19:42.012 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:42.012 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3141474 00:19:42.012 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3141474 ']' 00:19:42.012 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3141474 00:19:42.012 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:42.012 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:42.012 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3141474 00:19:42.012 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:42.012 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:42.012 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3141474' 00:19:42.012 killing process with pid 3141474 00:19:42.012 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3141474 00:19:42.012 Received shutdown signal, test time was about 10.000000 seconds 00:19:42.012 00:19:42.012 Latency(us) 00:19:42.012 [2024-10-16T05:02:41.511Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:42.012 [2024-10-16T05:02:41.511Z] 
=================================================================================================================== 00:19:42.012 [2024-10-16T05:02:41.511Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:42.012 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3141474 00:19:42.012 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.u4xrxuUXMO 00:19:42.012 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:42.012 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.u4xrxuUXMO 00:19:42.012 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:42.012 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:42.012 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:42.012 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:42.012 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.u4xrxuUXMO 00:19:42.012 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:42.012 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:42.012 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:42.012 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.u4xrxuUXMO 00:19:42.012 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:42.012 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3143666 00:19:42.012 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:42.012 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3143666 /var/tmp/bdevperf.sock 00:19:42.012 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:42.012 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3143666 ']' 00:19:42.013 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:42.013 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:42.013 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:42.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
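What follows is the deliberate failure case: the same attach is retried under target/tls.sh@147 with the second key file (/tmp/tmp.u4xrxuUXMO), which is not the key the target's subsystem host entry was configured with, so bdev_nvme_attach_controller is expected to error out and the NOT wrapper turns that failure into a pass (the later 'es=1 ... (( !es == 0 ))' trace is that inversion). A hedged sketch of the wrapper's core idea, assuming this simplified shape rather than autotest_common.sh's exact source (not_sketch is a hypothetical name):

not_sketch() {
    if "$@"; then
        return 1   # command unexpectedly succeeded: the negative test fails
    fi
    return 0       # command failed as required (here: attach with the wrong PSK)
}
not_sketch run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.u4xrxuUXMO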
00:19:42.013 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:42.013 07:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:42.013 [2024-10-16 07:02:41.412515] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:19:42.013 [2024-10-16 07:02:41.412574] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3143666 ] 00:19:42.013 [2024-10-16 07:02:41.490486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.273 [2024-10-16 07:02:41.519471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:42.844 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:42.844 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:42.845 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.u4xrxuUXMO 00:19:43.106 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:43.106 [2024-10-16 07:02:42.493850] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:43.106 [2024-10-16 07:02:42.503285] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:43.106 [2024-10-16 07:02:42.503857] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dd4c70 (107): Transport endpoint is not connected 00:19:43.107 [2024-10-16 07:02:42.504853] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dd4c70 (9): Bad file descriptor 00:19:43.107 [2024-10-16 07:02:42.505854] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:43.107 [2024-10-16 07:02:42.505862] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:43.107 [2024-10-16 07:02:42.505867] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:43.107 [2024-10-16 07:02:42.505875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:43.107 request: 00:19:43.107 { 00:19:43.107 "name": "TLSTEST", 00:19:43.107 "trtype": "tcp", 00:19:43.107 "traddr": "10.0.0.2", 00:19:43.107 "adrfam": "ipv4", 00:19:43.107 "trsvcid": "4420", 00:19:43.107 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:43.107 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:43.107 "prchk_reftag": false, 00:19:43.107 "prchk_guard": false, 00:19:43.107 "hdgst": false, 00:19:43.107 "ddgst": false, 00:19:43.107 "psk": "key0", 00:19:43.107 "allow_unrecognized_csi": false, 00:19:43.107 "method": "bdev_nvme_attach_controller", 00:19:43.107 "req_id": 1 00:19:43.107 } 00:19:43.107 Got JSON-RPC error response 00:19:43.107 response: 00:19:43.107 { 00:19:43.107 "code": -5, 00:19:43.107 "message": "Input/output error" 00:19:43.107 } 00:19:43.107 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3143666 00:19:43.107 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3143666 ']' 00:19:43.107 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3143666 00:19:43.107 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:43.107 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:43.107 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3143666 00:19:43.107 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:43.107 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:43.107 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3143666' 00:19:43.107 killing process with pid 3143666 00:19:43.107 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3143666 00:19:43.107 Received shutdown signal, test time was about 10.000000 seconds 00:19:43.107 00:19:43.107 Latency(us) 00:19:43.107 [2024-10-16T05:02:42.606Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:43.107 [2024-10-16T05:02:42.606Z] =================================================================================================================== 00:19:43.107 [2024-10-16T05:02:42.606Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:43.107 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3143666 00:19:43.368 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:43.368 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:43.368 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:43.368 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:43.368 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:43.368 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.44hjdLhSn8 00:19:43.368 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:43.368 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.44hjdLhSn8 00:19:43.368 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:43.368 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:43.368 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:43.368 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:43.368 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.44hjdLhSn8 00:19:43.368 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:43.368 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:43.368 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:43.368 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.44hjdLhSn8 00:19:43.368 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:43.368 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3143852 00:19:43.368 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:43.368 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3143852 /var/tmp/bdevperf.sock 00:19:43.368 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:43.368 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3143852 ']' 00:19:43.368 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:43.368 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:43.368 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:43.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:43.368 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:43.368 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:43.368 [2024-10-16 07:02:42.733795] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
00:19:43.368 [2024-10-16 07:02:42.733859] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3143852 ] 00:19:43.368 [2024-10-16 07:02:42.807735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.368 [2024-10-16 07:02:42.836523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:43.629 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:43.629 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:43.629 07:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.44hjdLhSn8 00:19:43.629 07:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:19:43.889 [2024-10-16 07:02:43.245312] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:43.889 [2024-10-16 07:02:43.255014] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:43.889 [2024-10-16 07:02:43.255033] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:43.890 [2024-10-16 07:02:43.255051] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:43.890 [2024-10-16 07:02:43.255534] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8c3c70 (107): Transport endpoint is not connected 00:19:43.890 [2024-10-16 07:02:43.256530] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8c3c70 (9): Bad file descriptor 00:19:43.890 [2024-10-16 07:02:43.257532] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:43.890 [2024-10-16 07:02:43.257540] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:43.890 [2024-10-16 07:02:43.257546] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:43.890 [2024-10-16 07:02:43.257553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
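This failure is the point of the test case: key0 was registered on the target for host1, but bdevperf connects as host2, so the target-side PSK lookup for the identity "NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1" finds nothing, the TLS handshake collapses, and the initiator sees errno 107 followed by the -5 Input/output error dumped in the records below. The identity string in those error lines follows the retained-PSK pattern of the NVMe/TCP TLS scheme; a sketch of its assembly (reading the trailing 01 as a two-digit hash identifier is an inference from the spec, not from this log):

    def tls_psk_identity(hostnqn, subnqn, hash_id=1):
        # "NVMe" + version 0 + "R" for a retained PSK + two-digit hash id,
        # then host NQN and subsystem NQN, space-separated.
        return "NVMe0R%02d %s %s" % (hash_id, hostnqn, subnqn)

    assert tls_psk_identity(
        "nqn.2016-06.io.spdk:host2", "nqn.2016-06.io.spdk:cnode1"
    ) == "NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1"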
00:19:43.890 request: 00:19:43.890 { 00:19:43.890 "name": "TLSTEST", 00:19:43.890 "trtype": "tcp", 00:19:43.890 "traddr": "10.0.0.2", 00:19:43.890 "adrfam": "ipv4", 00:19:43.890 "trsvcid": "4420", 00:19:43.890 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:43.890 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:43.890 "prchk_reftag": false, 00:19:43.890 "prchk_guard": false, 00:19:43.890 "hdgst": false, 00:19:43.890 "ddgst": false, 00:19:43.890 "psk": "key0", 00:19:43.890 "allow_unrecognized_csi": false, 00:19:43.890 "method": "bdev_nvme_attach_controller", 00:19:43.890 "req_id": 1 00:19:43.890 } 00:19:43.890 Got JSON-RPC error response 00:19:43.890 response: 00:19:43.890 { 00:19:43.890 "code": -5, 00:19:43.890 "message": "Input/output error" 00:19:43.890 } 00:19:43.890 07:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3143852 00:19:43.890 07:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3143852 ']' 00:19:43.890 07:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3143852 00:19:43.890 07:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:43.890 07:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:43.890 07:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3143852 00:19:43.890 07:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:43.890 07:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:43.890 07:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3143852' 00:19:43.890 killing process with pid 3143852 00:19:43.890 07:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3143852 00:19:43.890 Received shutdown signal, test time was about 10.000000 seconds 00:19:43.890 00:19:43.890 Latency(us) 00:19:43.890 [2024-10-16T05:02:43.389Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:43.890 [2024-10-16T05:02:43.389Z] =================================================================================================================== 00:19:43.890 [2024-10-16T05:02:43.389Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:43.890 07:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3143852 00:19:44.150 07:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:44.150 07:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:44.150 07:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:44.150 07:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:44.150 07:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:44.150 07:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.44hjdLhSn8 00:19:44.150 07:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:44.150 07:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.44hjdLhSn8 00:19:44.150 07:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:44.150 07:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:44.150 07:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:44.150 07:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:44.150 07:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.44hjdLhSn8 00:19:44.150 07:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:44.150 07:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:44.150 07:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:44.150 07:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.44hjdLhSn8 00:19:44.150 07:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:44.150 07:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3144170 00:19:44.150 07:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:44.151 07:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3144170 /var/tmp/bdevperf.sock 00:19:44.151 07:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:44.151 07:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3144170 ']' 00:19:44.151 07:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:44.151 07:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:44.151 07:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:44.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:44.151 07:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:44.151 07:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:44.151 [2024-10-16 07:02:43.505791] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
00:19:44.151 [2024-10-16 07:02:43.505852] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3144170 ] 00:19:44.151 [2024-10-16 07:02:43.582408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.151 [2024-10-16 07:02:43.610244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:45.092 07:02:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:45.092 07:02:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:45.092 07:02:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.44hjdLhSn8 00:19:45.092 07:02:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:45.092 [2024-10-16 07:02:44.592338] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:45.353 [2024-10-16 07:02:44.598291] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:45.353 [2024-10-16 07:02:44.598309] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:45.353 [2024-10-16 07:02:44.598327] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:45.353 [2024-10-16 07:02:44.598469] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e8c70 (107): Transport endpoint is not connected 00:19:45.353 [2024-10-16 07:02:44.599463] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e8c70 (9): Bad file descriptor 00:19:45.353 [2024-10-16 07:02:44.600465] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:45.353 [2024-10-16 07:02:44.600473] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:45.353 [2024-10-16 07:02:44.600479] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:19:45.353 [2024-10-16 07:02:44.600488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:19:45.353 request: 00:19:45.353 { 00:19:45.353 "name": "TLSTEST", 00:19:45.353 "trtype": "tcp", 00:19:45.353 "traddr": "10.0.0.2", 00:19:45.353 "adrfam": "ipv4", 00:19:45.353 "trsvcid": "4420", 00:19:45.353 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:45.353 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:45.353 "prchk_reftag": false, 00:19:45.353 "prchk_guard": false, 00:19:45.353 "hdgst": false, 00:19:45.353 "ddgst": false, 00:19:45.353 "psk": "key0", 00:19:45.353 "allow_unrecognized_csi": false, 00:19:45.353 "method": "bdev_nvme_attach_controller", 00:19:45.353 "req_id": 1 00:19:45.353 } 00:19:45.353 Got JSON-RPC error response 00:19:45.353 response: 00:19:45.353 { 00:19:45.353 "code": -5, 00:19:45.353 "message": "Input/output error" 00:19:45.353 } 00:19:45.353 07:02:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3144170 00:19:45.353 07:02:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3144170 ']' 00:19:45.353 07:02:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3144170 00:19:45.353 07:02:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:45.353 07:02:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:45.353 07:02:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3144170 00:19:45.353 07:02:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:45.353 07:02:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:45.353 07:02:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3144170' 00:19:45.353 killing process with pid 3144170 00:19:45.353 07:02:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3144170 00:19:45.353 Received shutdown signal, test time was about 10.000000 seconds 00:19:45.353 00:19:45.353 Latency(us) 00:19:45.353 [2024-10-16T05:02:44.852Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:45.353 [2024-10-16T05:02:44.852Z] =================================================================================================================== 00:19:45.353 [2024-10-16T05:02:44.852Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:45.353 07:02:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3144170 00:19:45.353 07:02:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:45.353 07:02:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:45.353 07:02:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:45.353 07:02:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:45.353 07:02:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:45.353 07:02:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:45.353 07:02:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:45.353 07:02:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:45.353 
07:02:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:45.353 07:02:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:45.353 07:02:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:45.353 07:02:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:45.353 07:02:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:45.353 07:02:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:45.353 07:02:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:45.353 07:02:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:45.353 07:02:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:45.353 07:02:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:45.353 07:02:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3144370 00:19:45.353 07:02:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:45.353 07:02:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3144370 /var/tmp/bdevperf.sock 00:19:45.353 07:02:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:45.353 07:02:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3144370 ']' 00:19:45.353 07:02:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:45.353 07:02:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:45.354 07:02:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:45.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:45.354 07:02:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:45.354 07:02:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:45.354 [2024-10-16 07:02:44.833128] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
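This third negative case hands run_bdevperf an empty string instead of a key path. In the records that follow, keyring_file_add_key refuses it before touching the filesystem ("Non-absolute paths are not allowed", JSON-RPC code -1), so key0 never exists and the subsequent attach fails with -126 Required key not available. The gate behaves like a plain absolute-path test:

    import os

    def keyring_path_acceptable(path):
        # "" is not absolute, so the empty-PSK case is rejected up front;
        # the /tmp/tmp.* paths used elsewhere in this run would pass.
        return os.path.isabs(path)

    assert not keyring_path_acceptable("")
    assert keyring_path_acceptable("/tmp/tmp.44hjdLhSn8")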
00:19:45.354 [2024-10-16 07:02:44.833188] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3144370 ] 00:19:45.614 [2024-10-16 07:02:44.907138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.614 [2024-10-16 07:02:44.935945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:45.614 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:45.614 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:45.614 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:19:45.874 [2024-10-16 07:02:45.168268] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:19:45.874 [2024-10-16 07:02:45.168290] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:45.874 request: 00:19:45.874 { 00:19:45.874 "name": "key0", 00:19:45.874 "path": "", 00:19:45.874 "method": "keyring_file_add_key", 00:19:45.874 "req_id": 1 00:19:45.874 } 00:19:45.874 Got JSON-RPC error response 00:19:45.874 response: 00:19:45.874 { 00:19:45.874 "code": -1, 00:19:45.874 "message": "Operation not permitted" 00:19:45.874 } 00:19:45.875 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:45.875 [2024-10-16 07:02:45.336763] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:45.875 [2024-10-16 07:02:45.336786] bdev_nvme.c:6391:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:45.875 request: 00:19:45.875 { 00:19:45.875 "name": "TLSTEST", 00:19:45.875 "trtype": "tcp", 00:19:45.875 "traddr": "10.0.0.2", 00:19:45.875 "adrfam": "ipv4", 00:19:45.875 "trsvcid": "4420", 00:19:45.875 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:45.875 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:45.875 "prchk_reftag": false, 00:19:45.875 "prchk_guard": false, 00:19:45.875 "hdgst": false, 00:19:45.875 "ddgst": false, 00:19:45.875 "psk": "key0", 00:19:45.875 "allow_unrecognized_csi": false, 00:19:45.875 "method": "bdev_nvme_attach_controller", 00:19:45.875 "req_id": 1 00:19:45.875 } 00:19:45.875 Got JSON-RPC error response 00:19:45.875 response: 00:19:45.875 { 00:19:45.875 "code": -126, 00:19:45.875 "message": "Required key not available" 00:19:45.875 } 00:19:45.875 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3144370 00:19:45.875 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3144370 ']' 00:19:45.875 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3144370 00:19:45.875 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:46.136 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:46.136 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
3144370 00:19:46.136 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:46.136 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:46.136 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3144370' 00:19:46.136 killing process with pid 3144370 00:19:46.136 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3144370 00:19:46.136 Received shutdown signal, test time was about 10.000000 seconds 00:19:46.136 00:19:46.136 Latency(us) 00:19:46.136 [2024-10-16T05:02:45.635Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:46.136 [2024-10-16T05:02:45.635Z] =================================================================================================================== 00:19:46.136 [2024-10-16T05:02:45.635Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:46.136 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3144370 00:19:46.136 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:46.136 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:46.136 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:46.136 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:46.136 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:46.136 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 3138479 00:19:46.136 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3138479 ']' 00:19:46.136 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3138479 00:19:46.136 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:46.136 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:46.136 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3138479 00:19:46.136 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:46.136 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:46.136 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3138479' 00:19:46.136 killing process with pid 3138479 00:19:46.136 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3138479 00:19:46.136 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3138479 00:19:46.397 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:46.397 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:46.397 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:19:46.397 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:19:46.397 07:02:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:46.397 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=2 00:19:46.397 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:19:46.397 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:46.397 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:19:46.397 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.ao4i7Bfgcc 00:19:46.397 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:46.397 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.ao4i7Bfgcc 00:19:46.397 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:19:46.397 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:46.397 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:46.397 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:46.397 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=3144543 00:19:46.397 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 3144543 00:19:46.397 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:46.397 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3144543 ']' 00:19:46.397 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:46.397 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:46.397 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:46.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:46.397 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:46.397 07:02:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:46.397 [2024-10-16 07:02:45.825630] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:19:46.397 [2024-10-16 07:02:45.825707] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:46.659 [2024-10-16 07:02:45.909759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.659 [2024-10-16 07:02:45.939714] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:46.659 [2024-10-16 07:02:45.939743] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
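The key_long value derived just above wraps the raw 48-hex-character secret in the NVMe TLS PSK interchange format, "NVMeTLSkey-1:<hh>:<base64>:", where <hh> is the digest identifier (02 here) and the base64 payload is the configured PSK bytes followed by a checksum. A sketch that should reproduce the printed value, assuming the checksum is a little-endian CRC32 as the inline python in the script appears to compute:

    import base64
    import zlib

    def format_interchange_psk(key, digest):
        # payload = PSK bytes plus their CRC32 (little-endian), base64'd
        data = key.encode()
        crc = zlib.crc32(data).to_bytes(4, "little")
        return "NVMeTLSkey-1:%02x:%s:" % (
            digest, base64.b64encode(data + crc).decode())

    print(format_interchange_psk(
        "00112233445566778899aabbccddeeff0011223344556677", 2))
    # expected to print the key_long value seen above (...wWXNJw==:)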
00:19:46.659 [2024-10-16 07:02:45.939749] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:46.659 [2024-10-16 07:02:45.939754] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:46.659 [2024-10-16 07:02:45.939758] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:46.659 [2024-10-16 07:02:45.940201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:47.231 07:02:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:47.231 07:02:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:47.231 07:02:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:47.231 07:02:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:47.231 07:02:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:47.231 07:02:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:47.231 07:02:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.ao4i7Bfgcc 00:19:47.232 07:02:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ao4i7Bfgcc 00:19:47.232 07:02:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:47.493 [2024-10-16 07:02:46.807545] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:47.493 07:02:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:47.754 07:02:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:47.754 [2024-10-16 07:02:47.164424] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:47.754 [2024-10-16 07:02:47.164622] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:47.754 07:02:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:48.015 malloc0 00:19:48.015 07:02:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:48.275 07:02:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ao4i7Bfgcc 00:19:48.276 07:02:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:48.537 07:02:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ao4i7Bfgcc 00:19:48.537 07:02:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:19:48.537 07:02:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:48.537 07:02:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:48.537 07:02:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ao4i7Bfgcc 00:19:48.537 07:02:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:48.537 07:02:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3144932 00:19:48.537 07:02:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:48.537 07:02:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3144932 /var/tmp/bdevperf.sock 00:19:48.537 07:02:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:48.537 07:02:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3144932 ']' 00:19:48.537 07:02:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:48.537 07:02:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:48.537 07:02:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:48.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:48.537 07:02:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:48.537 07:02:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:48.537 [2024-10-16 07:02:47.952193] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
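The bdevperf instance starting here is the first positive-path run: in the records below key0 loads cleanly, the controller attaches over TLS as bdev TLSTESTn1, and bdevperf.py fires the preconfigured workload (-q 128 -o 4096 -w verify -t 10 from the bdevperf command line). The wrapper itself is thin; in terms of the spdk_rpc sketch earlier it amounts to a single call (the method name is assumed to match the wrapper's subcommand, and its -t 20 is taken to be the wrapper's own response timeout rather than the I/O duration):

    # Trigger the configured workload and block until the stats JSON
    # (the block printed below) comes back.
    stats = spdk_rpc("/var/tmp/bdevperf.sock", "perform_tests")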
00:19:48.537 [2024-10-16 07:02:47.952247] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3144932 ] 00:19:48.537 [2024-10-16 07:02:48.029555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.797 [2024-10-16 07:02:48.058509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:49.368 07:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:49.368 07:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:49.368 07:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ao4i7Bfgcc 00:19:49.628 07:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:49.628 [2024-10-16 07:02:49.072922] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:49.888 TLSTESTn1 00:19:49.888 07:02:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:49.888 Running I/O for 10 seconds... 00:19:51.774 5602.00 IOPS, 21.88 MiB/s [2024-10-16T05:02:52.308Z] 5193.00 IOPS, 20.29 MiB/s [2024-10-16T05:02:53.691Z] 5567.33 IOPS, 21.75 MiB/s [2024-10-16T05:02:54.634Z] 5615.25 IOPS, 21.93 MiB/s [2024-10-16T05:02:55.575Z] 5582.00 IOPS, 21.80 MiB/s [2024-10-16T05:02:56.515Z] 5306.17 IOPS, 20.73 MiB/s [2024-10-16T05:02:57.455Z] 5388.14 IOPS, 21.05 MiB/s [2024-10-16T05:02:58.394Z] 5399.00 IOPS, 21.09 MiB/s [2024-10-16T05:02:59.335Z] 5384.11 IOPS, 21.03 MiB/s [2024-10-16T05:02:59.335Z] 5367.60 IOPS, 20.97 MiB/s 00:19:59.836 Latency(us) 00:19:59.836 [2024-10-16T05:02:59.335Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.836 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:59.836 Verification LBA range: start 0x0 length 0x2000 00:19:59.836 TLSTESTn1 : 10.02 5372.01 20.98 0.00 0.00 23793.25 5188.27 24685.23 00:19:59.836 [2024-10-16T05:02:59.335Z] =================================================================================================================== 00:19:59.836 [2024-10-16T05:02:59.335Z] Total : 5372.01 20.98 0.00 0.00 23793.25 5188.27 24685.23 00:19:59.836 { 00:19:59.836 "results": [ 00:19:59.836 { 00:19:59.836 "job": "TLSTESTn1", 00:19:59.836 "core_mask": "0x4", 00:19:59.836 "workload": "verify", 00:19:59.836 "status": "finished", 00:19:59.836 "verify_range": { 00:19:59.836 "start": 0, 00:19:59.836 "length": 8192 00:19:59.836 }, 00:19:59.836 "queue_depth": 128, 00:19:59.836 "io_size": 4096, 00:19:59.836 "runtime": 10.015254, 00:19:59.836 "iops": 5372.005542745097, 00:19:59.836 "mibps": 20.984396651348035, 00:19:59.836 "io_failed": 0, 00:19:59.836 "io_timeout": 0, 00:19:59.836 "avg_latency_us": 23793.25228355823, 00:19:59.836 "min_latency_us": 5188.266666666666, 00:19:59.836 "max_latency_us": 24685.226666666666 00:19:59.836 } 00:19:59.836 ], 00:19:59.836 
"core_count": 1 00:19:59.836 } 00:19:59.836 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:59.836 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3144932 00:19:59.836 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3144932 ']' 00:19:59.836 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3144932 00:19:59.836 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:59.836 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:59.836 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3144932 00:20:00.098 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:00.098 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:00.098 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3144932' 00:20:00.098 killing process with pid 3144932 00:20:00.098 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3144932 00:20:00.098 Received shutdown signal, test time was about 10.000000 seconds 00:20:00.098 00:20:00.098 Latency(us) 00:20:00.098 [2024-10-16T05:02:59.597Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:00.098 [2024-10-16T05:02:59.597Z] =================================================================================================================== 00:20:00.098 [2024-10-16T05:02:59.597Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:00.098 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3144932 00:20:00.098 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.ao4i7Bfgcc 00:20:00.098 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ao4i7Bfgcc 00:20:00.098 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:00.098 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ao4i7Bfgcc 00:20:00.098 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:00.098 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:00.098 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:00.098 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:00.098 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ao4i7Bfgcc 00:20:00.098 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:00.098 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:00.098 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 
00:20:00.098 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ao4i7Bfgcc 00:20:00.098 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:00.098 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3147257 00:20:00.098 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:00.098 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3147257 /var/tmp/bdevperf.sock 00:20:00.098 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:00.098 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3147257 ']' 00:20:00.098 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:00.098 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:00.098 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:00.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:00.098 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:00.098 07:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.098 [2024-10-16 07:02:59.552808] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
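The key file was flipped to 0666 just before this bdevperf came up, so in the next records keyring_file_add_key rejects it ("Invalid permissions for key file ... 0100666"; the 0100666 is the full st_mode, i.e. a regular file plus mode 0666) and the attach again ends in -126. The behavior across this run (0600 accepted, 0666 refused) suggests a rule along these lines:

    import os
    import stat

    def psk_file_perms_ok(path):
        # A guess at the keyring's gate: no group or other permission bits
        # may be set on the key file; owner-only 0600 passes.
        mode = os.stat(path).st_mode
        return (mode & (stat.S_IRWXG | stat.S_IRWXO)) == 0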
00:20:00.098 [2024-10-16 07:02:59.552868] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3147257 ] 00:20:00.358 [2024-10-16 07:02:59.630340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.358 [2024-10-16 07:02:59.658510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:00.931 07:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:00.931 07:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:00.931 07:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ao4i7Bfgcc 00:20:01.191 [2024-10-16 07:03:00.504537] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ao4i7Bfgcc': 0100666 00:20:01.191 [2024-10-16 07:03:00.504567] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:01.191 request: 00:20:01.191 { 00:20:01.191 "name": "key0", 00:20:01.191 "path": "/tmp/tmp.ao4i7Bfgcc", 00:20:01.191 "method": "keyring_file_add_key", 00:20:01.191 "req_id": 1 00:20:01.191 } 00:20:01.191 Got JSON-RPC error response 00:20:01.191 response: 00:20:01.191 { 00:20:01.191 "code": -1, 00:20:01.191 "message": "Operation not permitted" 00:20:01.191 } 00:20:01.191 07:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:01.191 [2024-10-16 07:03:00.689076] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:01.191 [2024-10-16 07:03:00.689098] bdev_nvme.c:6391:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:01.452 request: 00:20:01.452 { 00:20:01.452 "name": "TLSTEST", 00:20:01.452 "trtype": "tcp", 00:20:01.452 "traddr": "10.0.0.2", 00:20:01.452 "adrfam": "ipv4", 00:20:01.452 "trsvcid": "4420", 00:20:01.452 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:01.452 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:01.452 "prchk_reftag": false, 00:20:01.452 "prchk_guard": false, 00:20:01.452 "hdgst": false, 00:20:01.452 "ddgst": false, 00:20:01.452 "psk": "key0", 00:20:01.452 "allow_unrecognized_csi": false, 00:20:01.452 "method": "bdev_nvme_attach_controller", 00:20:01.452 "req_id": 1 00:20:01.452 } 00:20:01.452 Got JSON-RPC error response 00:20:01.452 response: 00:20:01.452 { 00:20:01.452 "code": -126, 00:20:01.452 "message": "Required key not available" 00:20:01.452 } 00:20:01.452 07:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3147257 00:20:01.452 07:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3147257 ']' 00:20:01.452 07:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3147257 00:20:01.452 07:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:01.452 07:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:01.452 07:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3147257 00:20:01.452 07:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:01.452 07:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:01.452 07:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3147257' 00:20:01.452 killing process with pid 3147257 00:20:01.452 07:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3147257 00:20:01.452 Received shutdown signal, test time was about 10.000000 seconds 00:20:01.452 00:20:01.452 Latency(us) 00:20:01.452 [2024-10-16T05:03:00.951Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:01.452 [2024-10-16T05:03:00.951Z] =================================================================================================================== 00:20:01.452 [2024-10-16T05:03:00.951Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:01.452 07:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3147257 00:20:01.452 07:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:01.452 07:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:01.452 07:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:01.452 07:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:01.452 07:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:01.452 07:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 3144543 00:20:01.452 07:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3144543 ']' 00:20:01.452 07:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3144543 00:20:01.452 07:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:01.452 07:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:01.452 07:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3144543 00:20:01.452 07:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:01.452 07:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:01.452 07:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3144543' 00:20:01.452 killing process with pid 3144543 00:20:01.452 07:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3144543 00:20:01.452 07:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3144543 00:20:01.713 07:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:20:01.713 07:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:01.713 07:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:01.713 07:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.713 07:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # 
nvmfpid=3147659 00:20:01.713 07:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 3147659 00:20:01.713 07:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:01.713 07:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3147659 ']' 00:20:01.713 07:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:01.713 07:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:01.713 07:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:01.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:01.713 07:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:01.713 07:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.713 [2024-10-16 07:03:01.110114] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:20:01.713 [2024-10-16 07:03:01.110171] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:01.713 [2024-10-16 07:03:01.193583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.973 [2024-10-16 07:03:01.222687] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:01.973 [2024-10-16 07:03:01.222720] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:01.973 [2024-10-16 07:03:01.222726] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:01.973 [2024-10-16 07:03:01.222731] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:01.973 [2024-10-16 07:03:01.222736] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
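This fresh target exists to show the same failure from the target side: in the records that follow, setup_nvmf_tgt gets as far as keyring_file_add_key, which still refuses the 0666 file, and nvmf_subsystem_add_host --psk key0 then fails with -32603 because no key of that name was ever registered. Replaying just those two calls with the spdk_rpc sketch from earlier, against the target's default RPC socket (the trace's target-side rpc.py calls pass no -s, so /var/tmp/spdk.sock is assumed; parameter names are copied from the request dumps below):

    # While the key file is still 0666, both calls come back with errors:
    spdk_rpc("/var/tmp/spdk.sock", "keyring_file_add_key",
             {"name": "key0", "path": "/tmp/tmp.ao4i7Bfgcc"})
    # -> code -1 "Operation not permitted"

    spdk_rpc("/var/tmp/spdk.sock", "nvmf_subsystem_add_host",
             {"nqn": "nqn.2016-06.io.spdk:cnode1",
              "host": "nqn.2016-06.io.spdk:host1",
              "psk": "key0"})
    # -> code -32603 "Internal error": key 'key0' does not exist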
00:20:01.973 [2024-10-16 07:03:01.223194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:02.543 07:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:02.543 07:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:02.543 07:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:02.543 07:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:02.544 07:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:02.544 07:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:02.544 07:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.ao4i7Bfgcc 00:20:02.544 07:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:02.544 07:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.ao4i7Bfgcc 00:20:02.544 07:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:20:02.544 07:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:02.544 07:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:20:02.544 07:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:02.544 07:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.ao4i7Bfgcc 00:20:02.544 07:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ao4i7Bfgcc 00:20:02.544 07:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:02.804 [2024-10-16 07:03:02.114594] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:02.804 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:03.065 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:03.065 [2024-10-16 07:03:02.467470] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:03.065 [2024-10-16 07:03:02.467679] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:03.065 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:03.325 malloc0 00:20:03.325 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:03.584 07:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ao4i7Bfgcc 00:20:03.584 [2024-10-16 
07:03:03.002530] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ao4i7Bfgcc': 0100666 00:20:03.584 [2024-10-16 07:03:03.002550] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:03.584 request: 00:20:03.584 { 00:20:03.584 "name": "key0", 00:20:03.584 "path": "/tmp/tmp.ao4i7Bfgcc", 00:20:03.584 "method": "keyring_file_add_key", 00:20:03.584 "req_id": 1 00:20:03.584 } 00:20:03.584 Got JSON-RPC error response 00:20:03.584 response: 00:20:03.585 { 00:20:03.585 "code": -1, 00:20:03.585 "message": "Operation not permitted" 00:20:03.585 } 00:20:03.585 07:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:03.845 [2024-10-16 07:03:03.174970] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:20:03.845 [2024-10-16 07:03:03.174996] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:03.845 request: 00:20:03.845 { 00:20:03.845 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:03.845 "host": "nqn.2016-06.io.spdk:host1", 00:20:03.845 "psk": "key0", 00:20:03.845 "method": "nvmf_subsystem_add_host", 00:20:03.845 "req_id": 1 00:20:03.845 } 00:20:03.845 Got JSON-RPC error response 00:20:03.845 response: 00:20:03.845 { 00:20:03.845 "code": -32603, 00:20:03.845 "message": "Internal error" 00:20:03.845 } 00:20:03.845 07:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:03.845 07:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:03.845 07:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:03.845 07:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:03.845 07:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 3147659 00:20:03.845 07:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3147659 ']' 00:20:03.845 07:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3147659 00:20:03.845 07:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:03.845 07:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:03.845 07:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3147659 00:20:03.845 07:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:03.845 07:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:03.845 07:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3147659' 00:20:03.845 killing process with pid 3147659 00:20:03.845 07:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3147659 00:20:03.845 07:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3147659 00:20:04.106 07:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.ao4i7Bfgcc 00:20:04.106 07:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:20:04.106 07:03:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:04.106 07:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:04.106 07:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:04.106 07:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=3148092 00:20:04.106 07:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 3148092 00:20:04.106 07:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:04.106 07:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3148092 ']' 00:20:04.106 07:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:04.106 07:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:04.106 07:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:04.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:04.106 07:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:04.106 07:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:04.106 [2024-10-16 07:03:03.442934] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:20:04.106 [2024-10-16 07:03:03.442990] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:04.106 [2024-10-16 07:03:03.524393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.106 [2024-10-16 07:03:03.552863] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:04.106 [2024-10-16 07:03:03.552893] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:04.106 [2024-10-16 07:03:03.552898] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:04.106 [2024-10-16 07:03:03.552903] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:04.106 [2024-10-16 07:03:03.552907] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
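The keyring_file_add_key failure above is the interesting part of this run: SPDK's file-based keyring rejects a PSK file with mode 0100666 (group/world-readable), so the RPC returns -1 "Operation not permitted", and the dependent nvmf_subsystem_add_host call then fails with "Key 'key0' does not exist". The test chmods the key to 0600 and repeats the setup, which succeeds below. A minimal replay of that target-side sequence, as a sketch only — $SPDK is assumed shorthand for this job's full repo path, and the key path is copied from the log:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumed shorthand for this job's checkout
    KEY=/tmp/tmp.ao4i7Bfgcc                                  # PSK interchange file from this run

    chmod 0600 "$KEY"   # keyring_file_check_path rejects 0666 keys

    $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o
    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $SPDK/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $SPDK/scripts/rpc.py keyring_file_add_key key0 "$KEY"
    $SPDK/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0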
00:20:04.106 [2024-10-16 07:03:03.553375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:05.046 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:05.046 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:05.046 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:05.046 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:05.046 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:05.046 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:05.046 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.ao4i7Bfgcc 00:20:05.046 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ao4i7Bfgcc 00:20:05.046 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:05.046 [2024-10-16 07:03:04.436577] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:05.046 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:05.306 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:05.306 [2024-10-16 07:03:04.757365] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:05.306 [2024-10-16 07:03:04.757565] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:05.306 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:05.566 malloc0 00:20:05.566 07:03:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:05.826 07:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ao4i7Bfgcc 00:20:05.826 07:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:06.086 07:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:06.086 07:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=3148562 00:20:06.087 07:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:06.087 07:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 3148562 /var/tmp/bdevperf.sock 00:20:06.087 07:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 3148562 ']' 00:20:06.087 07:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:06.087 07:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:06.087 07:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:06.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:06.087 07:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:06.087 07:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.087 [2024-10-16 07:03:05.459694] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:20:06.087 [2024-10-16 07:03:05.459753] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3148562 ] 00:20:06.087 [2024-10-16 07:03:05.536801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.087 [2024-10-16 07:03:05.565902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:06.347 07:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:06.347 07:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:06.347 07:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ao4i7Bfgcc 00:20:06.347 07:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:06.607 [2024-10-16 07:03:05.942815] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:06.607 TLSTESTn1 00:20:06.607 07:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:06.868 07:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:20:06.868 "subsystems": [ 00:20:06.868 { 00:20:06.868 "subsystem": "keyring", 00:20:06.868 "config": [ 00:20:06.868 { 00:20:06.868 "method": "keyring_file_add_key", 00:20:06.868 "params": { 00:20:06.868 "name": "key0", 00:20:06.868 "path": "/tmp/tmp.ao4i7Bfgcc" 00:20:06.868 } 00:20:06.868 } 00:20:06.868 ] 00:20:06.868 }, 00:20:06.868 { 00:20:06.868 "subsystem": "iobuf", 00:20:06.868 "config": [ 00:20:06.868 { 00:20:06.868 "method": "iobuf_set_options", 00:20:06.868 "params": { 00:20:06.868 "small_pool_count": 8192, 00:20:06.868 "large_pool_count": 1024, 00:20:06.868 "small_bufsize": 8192, 00:20:06.868 "large_bufsize": 135168 00:20:06.868 } 00:20:06.868 } 00:20:06.868 ] 00:20:06.868 }, 00:20:06.868 { 00:20:06.868 "subsystem": "sock", 00:20:06.868 "config": [ 00:20:06.868 { 00:20:06.868 "method": "sock_set_default_impl", 00:20:06.868 "params": { 00:20:06.868 "impl_name": "posix" 00:20:06.868 } 00:20:06.868 }, 
00:20:06.868 { 00:20:06.868 "method": "sock_impl_set_options", 00:20:06.868 "params": { 00:20:06.868 "impl_name": "ssl", 00:20:06.868 "recv_buf_size": 4096, 00:20:06.868 "send_buf_size": 4096, 00:20:06.868 "enable_recv_pipe": true, 00:20:06.868 "enable_quickack": false, 00:20:06.868 "enable_placement_id": 0, 00:20:06.868 "enable_zerocopy_send_server": true, 00:20:06.868 "enable_zerocopy_send_client": false, 00:20:06.868 "zerocopy_threshold": 0, 00:20:06.868 "tls_version": 0, 00:20:06.868 "enable_ktls": false 00:20:06.868 } 00:20:06.868 }, 00:20:06.868 { 00:20:06.868 "method": "sock_impl_set_options", 00:20:06.868 "params": { 00:20:06.868 "impl_name": "posix", 00:20:06.868 "recv_buf_size": 2097152, 00:20:06.868 "send_buf_size": 2097152, 00:20:06.868 "enable_recv_pipe": true, 00:20:06.868 "enable_quickack": false, 00:20:06.868 "enable_placement_id": 0, 00:20:06.868 "enable_zerocopy_send_server": true, 00:20:06.868 "enable_zerocopy_send_client": false, 00:20:06.868 "zerocopy_threshold": 0, 00:20:06.868 "tls_version": 0, 00:20:06.868 "enable_ktls": false 00:20:06.868 } 00:20:06.868 } 00:20:06.868 ] 00:20:06.868 }, 00:20:06.868 { 00:20:06.868 "subsystem": "vmd", 00:20:06.868 "config": [] 00:20:06.868 }, 00:20:06.868 { 00:20:06.868 "subsystem": "accel", 00:20:06.868 "config": [ 00:20:06.868 { 00:20:06.868 "method": "accel_set_options", 00:20:06.868 "params": { 00:20:06.868 "small_cache_size": 128, 00:20:06.868 "large_cache_size": 16, 00:20:06.868 "task_count": 2048, 00:20:06.868 "sequence_count": 2048, 00:20:06.868 "buf_count": 2048 00:20:06.868 } 00:20:06.868 } 00:20:06.868 ] 00:20:06.868 }, 00:20:06.868 { 00:20:06.868 "subsystem": "bdev", 00:20:06.868 "config": [ 00:20:06.868 { 00:20:06.868 "method": "bdev_set_options", 00:20:06.868 "params": { 00:20:06.868 "bdev_io_pool_size": 65535, 00:20:06.868 "bdev_io_cache_size": 256, 00:20:06.868 "bdev_auto_examine": true, 00:20:06.868 "iobuf_small_cache_size": 128, 00:20:06.868 "iobuf_large_cache_size": 16 00:20:06.868 } 00:20:06.868 }, 00:20:06.868 { 00:20:06.868 "method": "bdev_raid_set_options", 00:20:06.868 "params": { 00:20:06.868 "process_window_size_kb": 1024, 00:20:06.868 "process_max_bandwidth_mb_sec": 0 00:20:06.868 } 00:20:06.868 }, 00:20:06.868 { 00:20:06.868 "method": "bdev_iscsi_set_options", 00:20:06.868 "params": { 00:20:06.868 "timeout_sec": 30 00:20:06.868 } 00:20:06.868 }, 00:20:06.868 { 00:20:06.868 "method": "bdev_nvme_set_options", 00:20:06.868 "params": { 00:20:06.868 "action_on_timeout": "none", 00:20:06.868 "timeout_us": 0, 00:20:06.868 "timeout_admin_us": 0, 00:20:06.868 "keep_alive_timeout_ms": 10000, 00:20:06.868 "arbitration_burst": 0, 00:20:06.869 "low_priority_weight": 0, 00:20:06.869 "medium_priority_weight": 0, 00:20:06.869 "high_priority_weight": 0, 00:20:06.869 "nvme_adminq_poll_period_us": 10000, 00:20:06.869 "nvme_ioq_poll_period_us": 0, 00:20:06.869 "io_queue_requests": 0, 00:20:06.869 "delay_cmd_submit": true, 00:20:06.869 "transport_retry_count": 4, 00:20:06.869 "bdev_retry_count": 3, 00:20:06.869 "transport_ack_timeout": 0, 00:20:06.869 "ctrlr_loss_timeout_sec": 0, 00:20:06.869 "reconnect_delay_sec": 0, 00:20:06.869 "fast_io_fail_timeout_sec": 0, 00:20:06.869 "disable_auto_failback": false, 00:20:06.869 "generate_uuids": false, 00:20:06.869 "transport_tos": 0, 00:20:06.869 "nvme_error_stat": false, 00:20:06.869 "rdma_srq_size": 0, 00:20:06.869 "io_path_stat": false, 00:20:06.869 "allow_accel_sequence": false, 00:20:06.869 "rdma_max_cq_size": 0, 00:20:06.869 "rdma_cm_event_timeout_ms": 0, 00:20:06.869 
"dhchap_digests": [ 00:20:06.869 "sha256", 00:20:06.869 "sha384", 00:20:06.869 "sha512" 00:20:06.869 ], 00:20:06.869 "dhchap_dhgroups": [ 00:20:06.869 "null", 00:20:06.869 "ffdhe2048", 00:20:06.869 "ffdhe3072", 00:20:06.869 "ffdhe4096", 00:20:06.869 "ffdhe6144", 00:20:06.869 "ffdhe8192" 00:20:06.869 ] 00:20:06.869 } 00:20:06.869 }, 00:20:06.869 { 00:20:06.869 "method": "bdev_nvme_set_hotplug", 00:20:06.869 "params": { 00:20:06.869 "period_us": 100000, 00:20:06.869 "enable": false 00:20:06.869 } 00:20:06.869 }, 00:20:06.869 { 00:20:06.869 "method": "bdev_malloc_create", 00:20:06.869 "params": { 00:20:06.869 "name": "malloc0", 00:20:06.869 "num_blocks": 8192, 00:20:06.869 "block_size": 4096, 00:20:06.869 "physical_block_size": 4096, 00:20:06.869 "uuid": "12094eda-2fef-41bc-ac1f-58b10245b1e1", 00:20:06.869 "optimal_io_boundary": 0, 00:20:06.869 "md_size": 0, 00:20:06.869 "dif_type": 0, 00:20:06.869 "dif_is_head_of_md": false, 00:20:06.869 "dif_pi_format": 0 00:20:06.869 } 00:20:06.869 }, 00:20:06.869 { 00:20:06.869 "method": "bdev_wait_for_examine" 00:20:06.869 } 00:20:06.869 ] 00:20:06.869 }, 00:20:06.869 { 00:20:06.869 "subsystem": "nbd", 00:20:06.869 "config": [] 00:20:06.869 }, 00:20:06.869 { 00:20:06.869 "subsystem": "scheduler", 00:20:06.869 "config": [ 00:20:06.869 { 00:20:06.869 "method": "framework_set_scheduler", 00:20:06.869 "params": { 00:20:06.869 "name": "static" 00:20:06.869 } 00:20:06.869 } 00:20:06.869 ] 00:20:06.869 }, 00:20:06.869 { 00:20:06.869 "subsystem": "nvmf", 00:20:06.869 "config": [ 00:20:06.869 { 00:20:06.869 "method": "nvmf_set_config", 00:20:06.869 "params": { 00:20:06.869 "discovery_filter": "match_any", 00:20:06.869 "admin_cmd_passthru": { 00:20:06.869 "identify_ctrlr": false 00:20:06.869 }, 00:20:06.869 "dhchap_digests": [ 00:20:06.869 "sha256", 00:20:06.869 "sha384", 00:20:06.869 "sha512" 00:20:06.869 ], 00:20:06.869 "dhchap_dhgroups": [ 00:20:06.869 "null", 00:20:06.869 "ffdhe2048", 00:20:06.869 "ffdhe3072", 00:20:06.869 "ffdhe4096", 00:20:06.869 "ffdhe6144", 00:20:06.869 "ffdhe8192" 00:20:06.869 ] 00:20:06.869 } 00:20:06.869 }, 00:20:06.869 { 00:20:06.869 "method": "nvmf_set_max_subsystems", 00:20:06.869 "params": { 00:20:06.869 "max_subsystems": 1024 00:20:06.869 } 00:20:06.869 }, 00:20:06.869 { 00:20:06.869 "method": "nvmf_set_crdt", 00:20:06.869 "params": { 00:20:06.869 "crdt1": 0, 00:20:06.869 "crdt2": 0, 00:20:06.869 "crdt3": 0 00:20:06.869 } 00:20:06.869 }, 00:20:06.869 { 00:20:06.869 "method": "nvmf_create_transport", 00:20:06.869 "params": { 00:20:06.869 "trtype": "TCP", 00:20:06.869 "max_queue_depth": 128, 00:20:06.869 "max_io_qpairs_per_ctrlr": 127, 00:20:06.869 "in_capsule_data_size": 4096, 00:20:06.869 "max_io_size": 131072, 00:20:06.869 "io_unit_size": 131072, 00:20:06.869 "max_aq_depth": 128, 00:20:06.869 "num_shared_buffers": 511, 00:20:06.869 "buf_cache_size": 4294967295, 00:20:06.869 "dif_insert_or_strip": false, 00:20:06.869 "zcopy": false, 00:20:06.869 "c2h_success": false, 00:20:06.869 "sock_priority": 0, 00:20:06.869 "abort_timeout_sec": 1, 00:20:06.869 "ack_timeout": 0, 00:20:06.869 "data_wr_pool_size": 0 00:20:06.869 } 00:20:06.869 }, 00:20:06.869 { 00:20:06.869 "method": "nvmf_create_subsystem", 00:20:06.869 "params": { 00:20:06.869 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:06.869 "allow_any_host": false, 00:20:06.869 "serial_number": "SPDK00000000000001", 00:20:06.869 "model_number": "SPDK bdev Controller", 00:20:06.869 "max_namespaces": 10, 00:20:06.869 "min_cntlid": 1, 00:20:06.869 "max_cntlid": 65519, 00:20:06.869 
"ana_reporting": false 00:20:06.869 } 00:20:06.869 }, 00:20:06.869 { 00:20:06.869 "method": "nvmf_subsystem_add_host", 00:20:06.869 "params": { 00:20:06.869 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:06.869 "host": "nqn.2016-06.io.spdk:host1", 00:20:06.869 "psk": "key0" 00:20:06.869 } 00:20:06.869 }, 00:20:06.869 { 00:20:06.869 "method": "nvmf_subsystem_add_ns", 00:20:06.869 "params": { 00:20:06.869 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:06.869 "namespace": { 00:20:06.869 "nsid": 1, 00:20:06.869 "bdev_name": "malloc0", 00:20:06.869 "nguid": "12094EDA2FEF41BCAC1F58B10245B1E1", 00:20:06.869 "uuid": "12094eda-2fef-41bc-ac1f-58b10245b1e1", 00:20:06.869 "no_auto_visible": false 00:20:06.869 } 00:20:06.869 } 00:20:06.869 }, 00:20:06.869 { 00:20:06.869 "method": "nvmf_subsystem_add_listener", 00:20:06.869 "params": { 00:20:06.869 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:06.869 "listen_address": { 00:20:06.869 "trtype": "TCP", 00:20:06.869 "adrfam": "IPv4", 00:20:06.869 "traddr": "10.0.0.2", 00:20:06.869 "trsvcid": "4420" 00:20:06.869 }, 00:20:06.869 "secure_channel": true 00:20:06.869 } 00:20:06.869 } 00:20:06.869 ] 00:20:06.869 } 00:20:06.869 ] 00:20:06.869 }' 00:20:06.869 07:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:07.130 07:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:20:07.130 "subsystems": [ 00:20:07.130 { 00:20:07.130 "subsystem": "keyring", 00:20:07.130 "config": [ 00:20:07.130 { 00:20:07.130 "method": "keyring_file_add_key", 00:20:07.130 "params": { 00:20:07.130 "name": "key0", 00:20:07.130 "path": "/tmp/tmp.ao4i7Bfgcc" 00:20:07.130 } 00:20:07.130 } 00:20:07.130 ] 00:20:07.130 }, 00:20:07.130 { 00:20:07.130 "subsystem": "iobuf", 00:20:07.130 "config": [ 00:20:07.130 { 00:20:07.130 "method": "iobuf_set_options", 00:20:07.130 "params": { 00:20:07.130 "small_pool_count": 8192, 00:20:07.130 "large_pool_count": 1024, 00:20:07.130 "small_bufsize": 8192, 00:20:07.130 "large_bufsize": 135168 00:20:07.130 } 00:20:07.130 } 00:20:07.130 ] 00:20:07.130 }, 00:20:07.130 { 00:20:07.130 "subsystem": "sock", 00:20:07.130 "config": [ 00:20:07.130 { 00:20:07.130 "method": "sock_set_default_impl", 00:20:07.130 "params": { 00:20:07.130 "impl_name": "posix" 00:20:07.130 } 00:20:07.130 }, 00:20:07.130 { 00:20:07.130 "method": "sock_impl_set_options", 00:20:07.130 "params": { 00:20:07.130 "impl_name": "ssl", 00:20:07.130 "recv_buf_size": 4096, 00:20:07.130 "send_buf_size": 4096, 00:20:07.130 "enable_recv_pipe": true, 00:20:07.130 "enable_quickack": false, 00:20:07.130 "enable_placement_id": 0, 00:20:07.130 "enable_zerocopy_send_server": true, 00:20:07.130 "enable_zerocopy_send_client": false, 00:20:07.130 "zerocopy_threshold": 0, 00:20:07.130 "tls_version": 0, 00:20:07.130 "enable_ktls": false 00:20:07.130 } 00:20:07.130 }, 00:20:07.130 { 00:20:07.130 "method": "sock_impl_set_options", 00:20:07.130 "params": { 00:20:07.130 "impl_name": "posix", 00:20:07.130 "recv_buf_size": 2097152, 00:20:07.130 "send_buf_size": 2097152, 00:20:07.130 "enable_recv_pipe": true, 00:20:07.130 "enable_quickack": false, 00:20:07.130 "enable_placement_id": 0, 00:20:07.130 "enable_zerocopy_send_server": true, 00:20:07.130 "enable_zerocopy_send_client": false, 00:20:07.130 "zerocopy_threshold": 0, 00:20:07.130 "tls_version": 0, 00:20:07.130 "enable_ktls": false 00:20:07.130 } 00:20:07.130 } 00:20:07.130 ] 00:20:07.130 }, 00:20:07.130 { 00:20:07.130 
"subsystem": "vmd", 00:20:07.130 "config": [] 00:20:07.130 }, 00:20:07.130 { 00:20:07.130 "subsystem": "accel", 00:20:07.130 "config": [ 00:20:07.130 { 00:20:07.130 "method": "accel_set_options", 00:20:07.130 "params": { 00:20:07.130 "small_cache_size": 128, 00:20:07.130 "large_cache_size": 16, 00:20:07.130 "task_count": 2048, 00:20:07.130 "sequence_count": 2048, 00:20:07.130 "buf_count": 2048 00:20:07.130 } 00:20:07.130 } 00:20:07.130 ] 00:20:07.130 }, 00:20:07.130 { 00:20:07.130 "subsystem": "bdev", 00:20:07.130 "config": [ 00:20:07.130 { 00:20:07.130 "method": "bdev_set_options", 00:20:07.130 "params": { 00:20:07.130 "bdev_io_pool_size": 65535, 00:20:07.130 "bdev_io_cache_size": 256, 00:20:07.130 "bdev_auto_examine": true, 00:20:07.130 "iobuf_small_cache_size": 128, 00:20:07.130 "iobuf_large_cache_size": 16 00:20:07.130 } 00:20:07.130 }, 00:20:07.130 { 00:20:07.130 "method": "bdev_raid_set_options", 00:20:07.130 "params": { 00:20:07.130 "process_window_size_kb": 1024, 00:20:07.130 "process_max_bandwidth_mb_sec": 0 00:20:07.130 } 00:20:07.130 }, 00:20:07.130 { 00:20:07.130 "method": "bdev_iscsi_set_options", 00:20:07.130 "params": { 00:20:07.130 "timeout_sec": 30 00:20:07.130 } 00:20:07.130 }, 00:20:07.130 { 00:20:07.130 "method": "bdev_nvme_set_options", 00:20:07.130 "params": { 00:20:07.130 "action_on_timeout": "none", 00:20:07.130 "timeout_us": 0, 00:20:07.130 "timeout_admin_us": 0, 00:20:07.130 "keep_alive_timeout_ms": 10000, 00:20:07.130 "arbitration_burst": 0, 00:20:07.130 "low_priority_weight": 0, 00:20:07.130 "medium_priority_weight": 0, 00:20:07.130 "high_priority_weight": 0, 00:20:07.130 "nvme_adminq_poll_period_us": 10000, 00:20:07.130 "nvme_ioq_poll_period_us": 0, 00:20:07.130 "io_queue_requests": 512, 00:20:07.130 "delay_cmd_submit": true, 00:20:07.130 "transport_retry_count": 4, 00:20:07.130 "bdev_retry_count": 3, 00:20:07.130 "transport_ack_timeout": 0, 00:20:07.130 "ctrlr_loss_timeout_sec": 0, 00:20:07.130 "reconnect_delay_sec": 0, 00:20:07.130 "fast_io_fail_timeout_sec": 0, 00:20:07.130 "disable_auto_failback": false, 00:20:07.130 "generate_uuids": false, 00:20:07.130 "transport_tos": 0, 00:20:07.130 "nvme_error_stat": false, 00:20:07.130 "rdma_srq_size": 0, 00:20:07.130 "io_path_stat": false, 00:20:07.130 "allow_accel_sequence": false, 00:20:07.130 "rdma_max_cq_size": 0, 00:20:07.130 "rdma_cm_event_timeout_ms": 0, 00:20:07.130 "dhchap_digests": [ 00:20:07.130 "sha256", 00:20:07.130 "sha384", 00:20:07.130 "sha512" 00:20:07.130 ], 00:20:07.130 "dhchap_dhgroups": [ 00:20:07.130 "null", 00:20:07.131 "ffdhe2048", 00:20:07.131 "ffdhe3072", 00:20:07.131 "ffdhe4096", 00:20:07.131 "ffdhe6144", 00:20:07.131 "ffdhe8192" 00:20:07.131 ] 00:20:07.131 } 00:20:07.131 }, 00:20:07.131 { 00:20:07.131 "method": "bdev_nvme_attach_controller", 00:20:07.131 "params": { 00:20:07.131 "name": "TLSTEST", 00:20:07.131 "trtype": "TCP", 00:20:07.131 "adrfam": "IPv4", 00:20:07.131 "traddr": "10.0.0.2", 00:20:07.131 "trsvcid": "4420", 00:20:07.131 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:07.131 "prchk_reftag": false, 00:20:07.131 "prchk_guard": false, 00:20:07.131 "ctrlr_loss_timeout_sec": 0, 00:20:07.131 "reconnect_delay_sec": 0, 00:20:07.131 "fast_io_fail_timeout_sec": 0, 00:20:07.131 "psk": "key0", 00:20:07.131 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:07.131 "hdgst": false, 00:20:07.131 "ddgst": false, 00:20:07.131 "multipath": "multipath" 00:20:07.131 } 00:20:07.131 }, 00:20:07.131 { 00:20:07.131 "method": "bdev_nvme_set_hotplug", 00:20:07.131 "params": { 00:20:07.131 "period_us": 
100000, 00:20:07.131 "enable": false 00:20:07.131 } 00:20:07.131 }, 00:20:07.131 { 00:20:07.131 "method": "bdev_wait_for_examine" 00:20:07.131 } 00:20:07.131 ] 00:20:07.131 }, 00:20:07.131 { 00:20:07.131 "subsystem": "nbd", 00:20:07.131 "config": [] 00:20:07.131 } 00:20:07.131 ] 00:20:07.131 }' 00:20:07.131 07:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 3148562 00:20:07.131 07:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3148562 ']' 00:20:07.131 07:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3148562 00:20:07.131 07:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:07.131 07:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:07.131 07:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3148562 00:20:07.131 07:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:07.131 07:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:07.131 07:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3148562' 00:20:07.131 killing process with pid 3148562 00:20:07.131 07:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3148562 00:20:07.131 Received shutdown signal, test time was about 10.000000 seconds 00:20:07.131 00:20:07.131 Latency(us) 00:20:07.131 [2024-10-16T05:03:06.630Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:07.131 [2024-10-16T05:03:06.630Z] =================================================================================================================== 00:20:07.131 [2024-10-16T05:03:06.630Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:07.131 07:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3148562 00:20:07.392 07:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 3148092 00:20:07.392 07:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3148092 ']' 00:20:07.392 07:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3148092 00:20:07.392 07:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:07.392 07:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:07.392 07:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3148092 00:20:07.392 07:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:07.392 07:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:07.392 07:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3148092' 00:20:07.392 killing process with pid 3148092 00:20:07.392 07:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3148092 00:20:07.392 07:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3148092 00:20:07.392 07:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:07.392 
07:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:07.392 07:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:07.392 07:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:07.392 07:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:20:07.392 "subsystems": [ 00:20:07.392 { 00:20:07.392 "subsystem": "keyring", 00:20:07.392 "config": [ 00:20:07.392 { 00:20:07.392 "method": "keyring_file_add_key", 00:20:07.392 "params": { 00:20:07.392 "name": "key0", 00:20:07.392 "path": "/tmp/tmp.ao4i7Bfgcc" 00:20:07.392 } 00:20:07.392 } 00:20:07.392 ] 00:20:07.392 }, 00:20:07.392 { 00:20:07.392 "subsystem": "iobuf", 00:20:07.392 "config": [ 00:20:07.392 { 00:20:07.392 "method": "iobuf_set_options", 00:20:07.392 "params": { 00:20:07.392 "small_pool_count": 8192, 00:20:07.392 "large_pool_count": 1024, 00:20:07.392 "small_bufsize": 8192, 00:20:07.392 "large_bufsize": 135168 00:20:07.392 } 00:20:07.392 } 00:20:07.392 ] 00:20:07.392 }, 00:20:07.392 { 00:20:07.392 "subsystem": "sock", 00:20:07.392 "config": [ 00:20:07.392 { 00:20:07.392 "method": "sock_set_default_impl", 00:20:07.392 "params": { 00:20:07.392 "impl_name": "posix" 00:20:07.392 } 00:20:07.392 }, 00:20:07.392 { 00:20:07.392 "method": "sock_impl_set_options", 00:20:07.392 "params": { 00:20:07.392 "impl_name": "ssl", 00:20:07.392 "recv_buf_size": 4096, 00:20:07.392 "send_buf_size": 4096, 00:20:07.392 "enable_recv_pipe": true, 00:20:07.392 "enable_quickack": false, 00:20:07.392 "enable_placement_id": 0, 00:20:07.392 "enable_zerocopy_send_server": true, 00:20:07.392 "enable_zerocopy_send_client": false, 00:20:07.392 "zerocopy_threshold": 0, 00:20:07.392 "tls_version": 0, 00:20:07.392 "enable_ktls": false 00:20:07.392 } 00:20:07.392 }, 00:20:07.392 { 00:20:07.392 "method": "sock_impl_set_options", 00:20:07.392 "params": { 00:20:07.392 "impl_name": "posix", 00:20:07.392 "recv_buf_size": 2097152, 00:20:07.392 "send_buf_size": 2097152, 00:20:07.392 "enable_recv_pipe": true, 00:20:07.392 "enable_quickack": false, 00:20:07.392 "enable_placement_id": 0, 00:20:07.392 "enable_zerocopy_send_server": true, 00:20:07.392 "enable_zerocopy_send_client": false, 00:20:07.392 "zerocopy_threshold": 0, 00:20:07.392 "tls_version": 0, 00:20:07.392 "enable_ktls": false 00:20:07.392 } 00:20:07.392 } 00:20:07.392 ] 00:20:07.392 }, 00:20:07.392 { 00:20:07.392 "subsystem": "vmd", 00:20:07.392 "config": [] 00:20:07.392 }, 00:20:07.392 { 00:20:07.392 "subsystem": "accel", 00:20:07.392 "config": [ 00:20:07.392 { 00:20:07.392 "method": "accel_set_options", 00:20:07.392 "params": { 00:20:07.392 "small_cache_size": 128, 00:20:07.392 "large_cache_size": 16, 00:20:07.392 "task_count": 2048, 00:20:07.392 "sequence_count": 2048, 00:20:07.392 "buf_count": 2048 00:20:07.392 } 00:20:07.392 } 00:20:07.392 ] 00:20:07.392 }, 00:20:07.392 { 00:20:07.392 "subsystem": "bdev", 00:20:07.392 "config": [ 00:20:07.392 { 00:20:07.392 "method": "bdev_set_options", 00:20:07.392 "params": { 00:20:07.392 "bdev_io_pool_size": 65535, 00:20:07.392 "bdev_io_cache_size": 256, 00:20:07.392 "bdev_auto_examine": true, 00:20:07.392 "iobuf_small_cache_size": 128, 00:20:07.392 "iobuf_large_cache_size": 16 00:20:07.392 } 00:20:07.392 }, 00:20:07.392 { 00:20:07.392 "method": "bdev_raid_set_options", 00:20:07.392 "params": { 00:20:07.392 "process_window_size_kb": 1024, 00:20:07.392 "process_max_bandwidth_mb_sec": 0 00:20:07.392 } 00:20:07.392 }, 
00:20:07.392 { 00:20:07.392 "method": "bdev_iscsi_set_options", 00:20:07.392 "params": { 00:20:07.392 "timeout_sec": 30 00:20:07.392 } 00:20:07.392 }, 00:20:07.392 { 00:20:07.392 "method": "bdev_nvme_set_options", 00:20:07.392 "params": { 00:20:07.392 "action_on_timeout": "none", 00:20:07.392 "timeout_us": 0, 00:20:07.392 "timeout_admin_us": 0, 00:20:07.392 "keep_alive_timeout_ms": 10000, 00:20:07.392 "arbitration_burst": 0, 00:20:07.392 "low_priority_weight": 0, 00:20:07.392 "medium_priority_weight": 0, 00:20:07.392 "high_priority_weight": 0, 00:20:07.392 "nvme_adminq_poll_period_us": 10000, 00:20:07.392 "nvme_ioq_poll_period_us": 0, 00:20:07.392 "io_queue_requests": 0, 00:20:07.392 "delay_cmd_submit": true, 00:20:07.392 "transport_retry_count": 4, 00:20:07.392 "bdev_retry_count": 3, 00:20:07.392 "transport_ack_timeout": 0, 00:20:07.392 "ctrlr_loss_timeout_sec": 0, 00:20:07.392 "reconnect_delay_sec": 0, 00:20:07.392 "fast_io_fail_timeout_sec": 0, 00:20:07.392 "disable_auto_failback": false, 00:20:07.392 "generate_uuids": false, 00:20:07.392 "transport_tos": 0, 00:20:07.392 "nvme_error_stat": false, 00:20:07.392 "rdma_srq_size": 0, 00:20:07.392 "io_path_stat": false, 00:20:07.392 "allow_accel_sequence": false, 00:20:07.392 "rdma_max_cq_size": 0, 00:20:07.392 "rdma_cm_event_timeout_ms": 0, 00:20:07.392 "dhchap_digests": [ 00:20:07.392 "sha256", 00:20:07.392 "sha384", 00:20:07.392 "sha512" 00:20:07.392 ], 00:20:07.392 "dhchap_dhgroups": [ 00:20:07.392 "null", 00:20:07.392 "ffdhe2048", 00:20:07.392 "ffdhe3072", 00:20:07.392 "ffdhe4096", 00:20:07.392 "ffdhe6144", 00:20:07.392 "ffdhe8192" 00:20:07.392 ] 00:20:07.392 } 00:20:07.392 }, 00:20:07.392 { 00:20:07.392 "method": "bdev_nvme_set_hotplug", 00:20:07.392 "params": { 00:20:07.392 "period_us": 100000, 00:20:07.392 "enable": false 00:20:07.392 } 00:20:07.392 }, 00:20:07.392 { 00:20:07.392 "method": "bdev_malloc_create", 00:20:07.392 "params": { 00:20:07.392 "name": "malloc0", 00:20:07.392 "num_blocks": 8192, 00:20:07.392 "block_size": 4096, 00:20:07.392 "physical_block_size": 4096, 00:20:07.392 "uuid": "12094eda-2fef-41bc-ac1f-58b10245b1e1", 00:20:07.392 "optimal_io_boundary": 0, 00:20:07.392 "md_size": 0, 00:20:07.392 "dif_type": 0, 00:20:07.392 "dif_is_head_of_md": false, 00:20:07.392 "dif_pi_format": 0 00:20:07.392 } 00:20:07.392 }, 00:20:07.392 { 00:20:07.392 "method": "bdev_wait_for_examine" 00:20:07.392 } 00:20:07.392 ] 00:20:07.392 }, 00:20:07.392 { 00:20:07.392 "subsystem": "nbd", 00:20:07.392 "config": [] 00:20:07.392 }, 00:20:07.392 { 00:20:07.392 "subsystem": "scheduler", 00:20:07.392 "config": [ 00:20:07.392 { 00:20:07.392 "method": "framework_set_scheduler", 00:20:07.392 "params": { 00:20:07.392 "name": "static" 00:20:07.392 } 00:20:07.392 } 00:20:07.392 ] 00:20:07.392 }, 00:20:07.392 { 00:20:07.392 "subsystem": "nvmf", 00:20:07.392 "config": [ 00:20:07.392 { 00:20:07.392 "method": "nvmf_set_config", 00:20:07.392 "params": { 00:20:07.392 "discovery_filter": "match_any", 00:20:07.393 "admin_cmd_passthru": { 00:20:07.393 "identify_ctrlr": false 00:20:07.393 }, 00:20:07.393 "dhchap_digests": [ 00:20:07.393 "sha256", 00:20:07.393 "sha384", 00:20:07.393 "sha512" 00:20:07.393 ], 00:20:07.393 "dhchap_dhgroups": [ 00:20:07.393 "null", 00:20:07.393 "ffdhe2048", 00:20:07.393 "ffdhe3072", 00:20:07.393 "ffdhe4096", 00:20:07.393 "ffdhe6144", 00:20:07.393 "ffdhe8192" 00:20:07.393 ] 00:20:07.393 } 00:20:07.393 }, 00:20:07.393 { 00:20:07.393 "method": "nvmf_set_max_subsystems", 00:20:07.393 "params": { 00:20:07.393 "max_subsystems": 1024 
00:20:07.393 } 00:20:07.393 }, 00:20:07.393 { 00:20:07.393 "method": "nvmf_set_crdt", 00:20:07.393 "params": { 00:20:07.393 "crdt1": 0, 00:20:07.393 "crdt2": 0, 00:20:07.393 "crdt3": 0 00:20:07.393 } 00:20:07.393 }, 00:20:07.393 { 00:20:07.393 "method": "nvmf_create_transport", 00:20:07.393 "params": { 00:20:07.393 "trtype": "TCP", 00:20:07.393 "max_queue_depth": 128, 00:20:07.393 "max_io_qpairs_per_ctrlr": 127, 00:20:07.393 "in_capsule_data_size": 4096, 00:20:07.393 "max_io_size": 131072, 00:20:07.393 "io_unit_size": 131072, 00:20:07.393 "max_aq_depth": 128, 00:20:07.393 "num_shared_buffers": 511, 00:20:07.393 "buf_cache_size": 4294967295, 00:20:07.393 "dif_insert_or_strip": false, 00:20:07.393 "zcopy": false, 00:20:07.393 "c2h_success": false, 00:20:07.393 "sock_priority": 0, 00:20:07.393 "abort_timeout_sec": 1, 00:20:07.393 "ack_timeout": 0, 00:20:07.393 "data_wr_pool_size": 0 00:20:07.393 } 00:20:07.393 }, 00:20:07.393 { 00:20:07.393 "method": "nvmf_create_subsystem", 00:20:07.393 "params": { 00:20:07.393 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:07.393 "allow_any_host": false, 00:20:07.393 "serial_number": "SPDK00000000000001", 00:20:07.393 "model_number": "SPDK bdev Controller", 00:20:07.393 "max_namespaces": 10, 00:20:07.393 "min_cntlid": 1, 00:20:07.393 "max_cntlid": 65519, 00:20:07.393 "ana_reporting": false 00:20:07.393 } 00:20:07.393 }, 00:20:07.393 { 00:20:07.393 "method": "nvmf_subsystem_add_host", 00:20:07.393 "params": { 00:20:07.393 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:07.393 "host": "nqn.2016-06.io.spdk:host1", 00:20:07.393 "psk": "key0" 00:20:07.393 } 00:20:07.393 }, 00:20:07.393 { 00:20:07.393 "method": "nvmf_subsystem_add_ns", 00:20:07.393 "params": { 00:20:07.393 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:07.393 "namespace": { 00:20:07.393 "nsid": 1, 00:20:07.393 "bdev_name": "malloc0", 00:20:07.393 "nguid": "12094EDA2FEF41BCAC1F58B10245B1E1", 00:20:07.393 "uuid": "12094eda-2fef-41bc-ac1f-58b10245b1e1", 00:20:07.393 "no_auto_visible": false 00:20:07.393 } 00:20:07.393 } 00:20:07.393 }, 00:20:07.393 { 00:20:07.393 "method": "nvmf_subsystem_add_listener", 00:20:07.393 "params": { 00:20:07.393 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:07.393 "listen_address": { 00:20:07.393 "trtype": "TCP", 00:20:07.393 "adrfam": "IPv4", 00:20:07.393 "traddr": "10.0.0.2", 00:20:07.393 "trsvcid": "4420" 00:20:07.393 }, 00:20:07.393 "secure_channel": true 00:20:07.393 } 00:20:07.393 } 00:20:07.393 ] 00:20:07.393 } 00:20:07.393 ] 00:20:07.393 }' 00:20:07.393 07:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=3148807 00:20:07.393 07:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 3148807 00:20:07.393 07:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:07.393 07:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3148807 ']' 00:20:07.393 07:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:07.393 07:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:07.393 07:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:07.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:07.393 07:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:07.393 07:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:07.653 [2024-10-16 07:03:06.951810] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:20:07.653 [2024-10-16 07:03:06.951874] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:07.653 [2024-10-16 07:03:07.009491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.653 [2024-10-16 07:03:07.037817] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:07.653 [2024-10-16 07:03:07.037850] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:07.653 [2024-10-16 07:03:07.037856] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:07.653 [2024-10-16 07:03:07.037861] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:07.653 [2024-10-16 07:03:07.037865] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:07.653 [2024-10-16 07:03:07.038344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:07.911 [2024-10-16 07:03:07.230810] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:07.912 [2024-10-16 07:03:07.262834] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:07.912 [2024-10-16 07:03:07.263048] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:08.482 07:03:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:08.482 07:03:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:08.482 07:03:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:08.482 07:03:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:08.482 07:03:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:08.482 07:03:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:08.482 07:03:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=3149151 00:20:08.482 07:03:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 3149151 /var/tmp/bdevperf.sock 00:20:08.482 07:03:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3149151 ']' 00:20:08.482 07:03:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:08.482 07:03:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:08.482 07:03:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:08.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:08.482 07:03:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:08.482 07:03:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:08.482 07:03:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:08.482 07:03:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:20:08.482 "subsystems": [ 00:20:08.482 { 00:20:08.482 "subsystem": "keyring", 00:20:08.482 "config": [ 00:20:08.482 { 00:20:08.482 "method": "keyring_file_add_key", 00:20:08.482 "params": { 00:20:08.482 "name": "key0", 00:20:08.482 "path": "/tmp/tmp.ao4i7Bfgcc" 00:20:08.482 } 00:20:08.482 } 00:20:08.482 ] 00:20:08.482 }, 00:20:08.482 { 00:20:08.482 "subsystem": "iobuf", 00:20:08.482 "config": [ 00:20:08.482 { 00:20:08.482 "method": "iobuf_set_options", 00:20:08.482 "params": { 00:20:08.482 "small_pool_count": 8192, 00:20:08.482 "large_pool_count": 1024, 00:20:08.482 "small_bufsize": 8192, 00:20:08.482 "large_bufsize": 135168 00:20:08.482 } 00:20:08.482 } 00:20:08.482 ] 00:20:08.482 }, 00:20:08.482 { 00:20:08.482 "subsystem": "sock", 00:20:08.482 "config": [ 00:20:08.482 { 00:20:08.482 "method": "sock_set_default_impl", 00:20:08.482 "params": { 00:20:08.482 "impl_name": "posix" 00:20:08.482 } 00:20:08.482 }, 00:20:08.482 { 00:20:08.482 "method": "sock_impl_set_options", 00:20:08.482 "params": { 00:20:08.482 "impl_name": "ssl", 00:20:08.482 "recv_buf_size": 4096, 00:20:08.482 "send_buf_size": 4096, 00:20:08.482 "enable_recv_pipe": true, 00:20:08.482 "enable_quickack": false, 00:20:08.482 "enable_placement_id": 0, 00:20:08.482 "enable_zerocopy_send_server": true, 00:20:08.482 "enable_zerocopy_send_client": false, 00:20:08.482 "zerocopy_threshold": 0, 00:20:08.482 "tls_version": 0, 00:20:08.482 "enable_ktls": false 00:20:08.482 } 00:20:08.482 }, 00:20:08.482 { 00:20:08.482 "method": "sock_impl_set_options", 00:20:08.482 "params": { 00:20:08.482 "impl_name": "posix", 00:20:08.482 "recv_buf_size": 2097152, 00:20:08.482 "send_buf_size": 2097152, 00:20:08.482 "enable_recv_pipe": true, 00:20:08.482 "enable_quickack": false, 00:20:08.482 "enable_placement_id": 0, 00:20:08.482 "enable_zerocopy_send_server": true, 00:20:08.482 "enable_zerocopy_send_client": false, 00:20:08.482 "zerocopy_threshold": 0, 00:20:08.482 "tls_version": 0, 00:20:08.482 "enable_ktls": false 00:20:08.482 } 00:20:08.482 } 00:20:08.482 ] 00:20:08.482 }, 00:20:08.482 { 00:20:08.482 "subsystem": "vmd", 00:20:08.482 "config": [] 00:20:08.482 }, 00:20:08.482 { 00:20:08.482 "subsystem": "accel", 00:20:08.482 "config": [ 00:20:08.482 { 00:20:08.482 "method": "accel_set_options", 00:20:08.482 "params": { 00:20:08.482 "small_cache_size": 128, 00:20:08.482 "large_cache_size": 16, 00:20:08.482 "task_count": 2048, 00:20:08.482 "sequence_count": 2048, 00:20:08.482 "buf_count": 2048 00:20:08.482 } 00:20:08.482 } 00:20:08.482 ] 00:20:08.482 }, 00:20:08.482 { 00:20:08.482 "subsystem": "bdev", 00:20:08.482 "config": [ 00:20:08.482 { 00:20:08.482 "method": "bdev_set_options", 00:20:08.482 "params": { 00:20:08.482 "bdev_io_pool_size": 65535, 00:20:08.482 "bdev_io_cache_size": 256, 00:20:08.482 "bdev_auto_examine": true, 00:20:08.482 "iobuf_small_cache_size": 128, 00:20:08.482 "iobuf_large_cache_size": 16 
00:20:08.482 } 00:20:08.482 }, 00:20:08.482 { 00:20:08.482 "method": "bdev_raid_set_options", 00:20:08.482 "params": { 00:20:08.482 "process_window_size_kb": 1024, 00:20:08.482 "process_max_bandwidth_mb_sec": 0 00:20:08.482 } 00:20:08.482 }, 00:20:08.482 { 00:20:08.482 "method": "bdev_iscsi_set_options", 00:20:08.482 "params": { 00:20:08.482 "timeout_sec": 30 00:20:08.482 } 00:20:08.482 }, 00:20:08.482 { 00:20:08.482 "method": "bdev_nvme_set_options", 00:20:08.482 "params": { 00:20:08.482 "action_on_timeout": "none", 00:20:08.482 "timeout_us": 0, 00:20:08.482 "timeout_admin_us": 0, 00:20:08.482 "keep_alive_timeout_ms": 10000, 00:20:08.482 "arbitration_burst": 0, 00:20:08.482 "low_priority_weight": 0, 00:20:08.482 "medium_priority_weight": 0, 00:20:08.482 "high_priority_weight": 0, 00:20:08.482 "nvme_adminq_poll_period_us": 10000, 00:20:08.482 "nvme_ioq_poll_period_us": 0, 00:20:08.482 "io_queue_requests": 512, 00:20:08.482 "delay_cmd_submit": true, 00:20:08.482 "transport_retry_count": 4, 00:20:08.482 "bdev_retry_count": 3, 00:20:08.482 "transport_ack_timeout": 0, 00:20:08.482 "ctrlr_loss_timeout_sec": 0, 00:20:08.482 "reconnect_delay_sec": 0, 00:20:08.482 "fast_io_fail_timeout_sec": 0, 00:20:08.482 "disable_auto_failback": false, 00:20:08.482 "generate_uuids": false, 00:20:08.482 "transport_tos": 0, 00:20:08.482 "nvme_error_stat": false, 00:20:08.482 "rdma_srq_size": 0, 00:20:08.482 "io_path_stat": false, 00:20:08.482 "allow_accel_sequence": false, 00:20:08.482 "rdma_max_cq_size": 0, 00:20:08.482 "rdma_cm_event_timeout_ms": 0, 00:20:08.482 "dhchap_digests": [ 00:20:08.482 "sha256", 00:20:08.482 "sha384", 00:20:08.482 "sha512" 00:20:08.482 ], 00:20:08.482 "dhchap_dhgroups": [ 00:20:08.482 "null", 00:20:08.482 "ffdhe2048", 00:20:08.482 "ffdhe3072", 00:20:08.482 "ffdhe4096", 00:20:08.482 "ffdhe6144", 00:20:08.482 "ffdhe8192" 00:20:08.482 ] 00:20:08.482 } 00:20:08.482 }, 00:20:08.482 { 00:20:08.482 "method": "bdev_nvme_attach_controller", 00:20:08.482 "params": { 00:20:08.482 "name": "TLSTEST", 00:20:08.482 "trtype": "TCP", 00:20:08.482 "adrfam": "IPv4", 00:20:08.482 "traddr": "10.0.0.2", 00:20:08.482 "trsvcid": "4420", 00:20:08.482 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:08.482 "prchk_reftag": false, 00:20:08.482 "prchk_guard": false, 00:20:08.482 "ctrlr_loss_timeout_sec": 0, 00:20:08.482 "reconnect_delay_sec": 0, 00:20:08.483 "fast_io_fail_timeout_sec": 0, 00:20:08.483 "psk": "key0", 00:20:08.483 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:08.483 "hdgst": false, 00:20:08.483 "ddgst": false, 00:20:08.483 "multipath": "multipath" 00:20:08.483 } 00:20:08.483 }, 00:20:08.483 { 00:20:08.483 "method": "bdev_nvme_set_hotplug", 00:20:08.483 "params": { 00:20:08.483 "period_us": 100000, 00:20:08.483 "enable": false 00:20:08.483 } 00:20:08.483 }, 00:20:08.483 { 00:20:08.483 "method": "bdev_wait_for_examine" 00:20:08.483 } 00:20:08.483 ] 00:20:08.483 }, 00:20:08.483 { 00:20:08.483 "subsystem": "nbd", 00:20:08.483 "config": [] 00:20:08.483 } 00:20:08.483 ] 00:20:08.483 }' 00:20:08.483 [2024-10-16 07:03:07.846556] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
00:20:08.483 [2024-10-16 07:03:07.846612] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3149151 ] 00:20:08.483 [2024-10-16 07:03:07.922047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.483 [2024-10-16 07:03:07.950914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:08.742 [2024-10-16 07:03:08.084352] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:09.312 07:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:09.312 07:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:09.312 07:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:09.312 Running I/O for 10 seconds... 00:20:11.633 5149.00 IOPS, 20.11 MiB/s [2024-10-16T05:03:12.073Z] 5262.00 IOPS, 20.55 MiB/s [2024-10-16T05:03:13.014Z] 5369.33 IOPS, 20.97 MiB/s [2024-10-16T05:03:13.955Z] 5589.00 IOPS, 21.83 MiB/s [2024-10-16T05:03:14.896Z] 5646.40 IOPS, 22.06 MiB/s [2024-10-16T05:03:15.838Z] 5596.00 IOPS, 21.86 MiB/s [2024-10-16T05:03:16.780Z] 5667.57 IOPS, 22.14 MiB/s [2024-10-16T05:03:18.164Z] 5696.25 IOPS, 22.25 MiB/s [2024-10-16T05:03:19.107Z] 5609.11 IOPS, 21.91 MiB/s [2024-10-16T05:03:19.107Z] 5590.00 IOPS, 21.84 MiB/s 00:20:19.608 Latency(us) 00:20:19.608 [2024-10-16T05:03:19.107Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:19.608 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:19.608 Verification LBA range: start 0x0 length 0x2000 00:20:19.608 TLSTESTn1 : 10.02 5591.60 21.84 0.00 0.00 22856.64 5843.63 76458.67 00:20:19.608 [2024-10-16T05:03:19.107Z] =================================================================================================================== 00:20:19.608 [2024-10-16T05:03:19.107Z] Total : 5591.60 21.84 0.00 0.00 22856.64 5843.63 76458.67 00:20:19.608 { 00:20:19.608 "results": [ 00:20:19.608 { 00:20:19.608 "job": "TLSTESTn1", 00:20:19.608 "core_mask": "0x4", 00:20:19.608 "workload": "verify", 00:20:19.608 "status": "finished", 00:20:19.608 "verify_range": { 00:20:19.608 "start": 0, 00:20:19.608 "length": 8192 00:20:19.608 }, 00:20:19.608 "queue_depth": 128, 00:20:19.608 "io_size": 4096, 00:20:19.608 "runtime": 10.020037, 00:20:19.608 "iops": 5591.596118856647, 00:20:19.608 "mibps": 21.842172339283778, 00:20:19.608 "io_failed": 0, 00:20:19.608 "io_timeout": 0, 00:20:19.608 "avg_latency_us": 22856.644759525, 00:20:19.608 "min_latency_us": 5843.626666666667, 00:20:19.608 "max_latency_us": 76458.66666666667 00:20:19.608 } 00:20:19.608 ], 00:20:19.608 "core_count": 1 00:20:19.608 } 00:20:19.608 07:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:19.608 07:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 3149151 00:20:19.608 07:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3149151 ']' 00:20:19.608 07:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3149151 00:20:19.608 07:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # uname 00:20:19.608 07:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:19.608 07:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3149151 00:20:19.608 07:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:19.608 07:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:19.608 07:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3149151' 00:20:19.608 killing process with pid 3149151 00:20:19.608 07:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3149151 00:20:19.608 Received shutdown signal, test time was about 10.000000 seconds 00:20:19.608 00:20:19.608 Latency(us) 00:20:19.608 [2024-10-16T05:03:19.107Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:19.608 [2024-10-16T05:03:19.107Z] =================================================================================================================== 00:20:19.608 [2024-10-16T05:03:19.107Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:19.608 07:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3149151 00:20:19.608 07:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 3148807 00:20:19.608 07:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3148807 ']' 00:20:19.608 07:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3148807 00:20:19.608 07:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:19.608 07:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:19.608 07:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3148807 00:20:19.608 07:03:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:19.608 07:03:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:19.608 07:03:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3148807' 00:20:19.608 killing process with pid 3148807 00:20:19.608 07:03:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3148807 00:20:19.608 07:03:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3148807 00:20:19.870 07:03:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:20:19.870 07:03:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:19.870 07:03:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:19.870 07:03:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:19.870 07:03:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=3151659 00:20:19.870 07:03:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 3151659 00:20:19.870 07:03:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
00:20:19.870 07:03:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3151659 ']' 00:20:19.870 07:03:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:19.870 07:03:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:19.870 07:03:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:19.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:19.870 07:03:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:19.870 07:03:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:19.870 [2024-10-16 07:03:19.191203] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:20:19.870 [2024-10-16 07:03:19.191268] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:19.870 [2024-10-16 07:03:19.274199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.870 [2024-10-16 07:03:19.320982] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:19.870 [2024-10-16 07:03:19.321036] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:19.870 [2024-10-16 07:03:19.321044] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:19.870 [2024-10-16 07:03:19.321057] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:19.870 [2024-10-16 07:03:19.321064] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
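The app_setup_trace notices repeated above describe the two ways to get at the tracepoint data this run enables with -e 0xFFFF. A sketch of both, assuming instance id 0 as in the notices; the -f flag for reading a saved trace file is an assumption about the spdk_trace tool, not something shown in this log:

    # Live snapshot while nvmf_tgt is still running (command quoted from the notice):
    spdk_trace -s nvmf -i 0

    # Offline: copy the shm trace file out before the app exits, then analyze the copy.
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0
    spdk_trace -f /tmp/nvmf_trace.0   # assumed flag for offline analysis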
00:20:19.870 [2024-10-16 07:03:19.321829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:20.814 07:03:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:20:20.814 07:03:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0
00:20:20.814 07:03:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:20:20.814 07:03:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable
00:20:20.814 07:03:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:20:20.814 07:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:20:20.814 07:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.ao4i7Bfgcc
00:20:20.814 07:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ao4i7Bfgcc
00:20:20.814 07:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:20:20.814 [2024-10-16 07:03:20.191194] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:20:20.814 07:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
00:20:21.075 07:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
00:20:21.335 [2024-10-16 07:03:20.584158] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:20:21.335 [2024-10-16 07:03:20.584519] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:20:21.335 07:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
00:20:21.335 malloc0
00:20:21.335 07:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:20:21.596 07:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ao4i7Bfgcc
00:20:21.857 07:03:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
00:20:22.117 07:03:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=3152253
00:20:22.117 07:03:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:20:22.117 07:03:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1
00:20:22.117 07:03:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 3152253 /var/tmp/bdevperf.sock
00:20:22.117 07:03:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3152253 ']'
00:20:22.117 07:03:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:20:22.117 07:03:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100
00:20:22.117 07:03:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:20:22.117 07:03:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable
00:20:22.118 07:03:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:20:22.118 [2024-10-16 07:03:21.435644] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization...
00:20:22.118 [2024-10-16 07:03:21.435717] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3152253 ]
00:20:22.118 [2024-10-16 07:03:21.517290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:22.118 [2024-10-16 07:03:21.553088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:20:23.058 07:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:20:23.058 07:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0
00:20:23.058 07:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ao4i7Bfgcc
00:20:23.058 07:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
00:20:23.319 [2024-10-16 07:03:22.567487] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:20:23.319 nvme0n1
00:20:23.319 07:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:20:23.319 Running I/O for 1 seconds...
00:20:24.519 4581.00 IOPS, 17.89 MiB/s
00:20:24.519 Latency(us)
[2024-10-16T05:03:24.018Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:24.519 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:20:24.519 Verification LBA range: start 0x0 length 0x2000
00:20:24.519 nvme0n1 : 1.02 4639.04 18.12 0.00 0.00 27407.61 6335.15 39321.60
00:20:24.519 [2024-10-16T05:03:24.018Z] ===================================================================================================================
00:20:24.519 [2024-10-16T05:03:24.018Z] Total : 4639.04 18.12 0.00 0.00 27407.61 6335.15 39321.60
00:20:24.519 {
00:20:24.519 "results": [
00:20:24.519 {
00:20:24.519 "job": "nvme0n1",
00:20:24.519 "core_mask": "0x2",
00:20:24.519 "workload": "verify",
00:20:24.519 "status": "finished",
00:20:24.519 "verify_range": {
00:20:24.519 "start": 0,
00:20:24.519 "length": 8192
00:20:24.519 },
00:20:24.519 "queue_depth": 128,
00:20:24.519 "io_size": 4096,
00:20:24.519 "runtime": 1.01508,
00:20:24.519 "iops": 4639.043228119951,
00:20:24.519 "mibps": 18.12126260984356,
00:20:24.519 "io_failed": 0,
00:20:24.519 "io_timeout": 0,
00:20:24.519 "avg_latency_us": 27407.606812486727,
00:20:24.519 "min_latency_us": 6335.1466666666665,
00:20:24.519 "max_latency_us": 39321.6
00:20:24.519 }
00:20:24.519 ],
00:20:24.519 "core_count": 1
00:20:24.519 }
00:20:24.519 07:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 3152253
00:20:24.519 07:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3152253 ']'
00:20:24.519 07:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3152253
00:20:24.519 07:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname
00:20:24.519 07:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:20:24.519 07:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3152253
00:20:24.519 07:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:20:24.519 07:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:20:24.519 07:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3152253'
killing process with pid 3152253
00:20:24.519 07:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3152253
00:20:24.519 Received shutdown signal, test time was about 1.000000 seconds
00:20:24.519
00:20:24.519 Latency(us)
[2024-10-16T05:03:24.018Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-10-16T05:03:24.018Z] ===================================================================================================================
[2024-10-16T05:03:24.018Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:24.519 07:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3152253
00:20:24.519 07:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 3151659
00:20:24.519 07:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3151659 ']'
00:20:24.519 07:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3151659
00:20:24.519 07:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname
00:20:24.519 07:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:20:24.519 07:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3151659
00:20:24.780 07:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:20:24.780 07:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:20:24.780 07:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3151659'
killing process with pid 3151659
00:20:24.780 07:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3151659
00:20:24.780 07:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3151659
00:20:24.780 07:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart
00:20:24.780 07:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:20:24.780 07:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable
00:20:24.780 07:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:20:24.780 07:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=3152681
00:20:24.780 07:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 3152681
00:20:24.780 07:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:20:24.780 07:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3152681 ']'
00:20:24.780 07:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:24.780 07:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100
00:20:24.780 07:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:24.780 07:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable
00:20:24.780 07:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:20:25.041 [2024-10-16 07:03:24.218836] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization...
00:20:25.041 [2024-10-16 07:03:24.218894] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:20:25.041 [2024-10-16 07:03:24.301651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:25.041 [2024-10-16 07:03:24.335866] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:20:25.041 [2024-10-16 07:03:24.335902] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:20:25.041 [2024-10-16 07:03:24.335914] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:20:25.041 [2024-10-16 07:03:24.335920] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:20:25.041 [2024-10-16 07:03:24.335926] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:20:25.041 [2024-10-16 07:03:24.336520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:25.613 07:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:20:25.613 07:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0
00:20:25.613 07:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:20:25.613 07:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable
00:20:25.613 07:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:20:25.613 07:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:20:25.613 07:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd
00:20:25.613 07:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:25.613 07:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:20:25.613 [2024-10-16 07:03:25.081820] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
malloc0
00:20:25.613 [2024-10-16 07:03:25.111948] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:20:25.613 [2024-10-16 07:03:25.112293] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:20:25.874 07:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:25.874 07:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=3153025
00:20:25.874 07:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 3153025 /var/tmp/bdevperf.sock
00:20:25.874 07:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1
00:20:25.874 07:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3153025 ']'
00:20:25.874 07:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:20:25.874 07:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100
00:20:25.874 07:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:20:25.874 07:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable
00:20:25.874 07:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:20:25.874 [2024-10-16 07:03:25.202877] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization...
00:20:25.874 [2024-10-16 07:03:25.202941] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3153025 ]
00:20:25.874 [2024-10-16 07:03:25.282851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:25.874 [2024-10-16 07:03:25.317939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:20:26.814 07:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:20:26.815 07:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0
00:20:26.815 07:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ao4i7Bfgcc
00:20:26.815 07:03:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
00:20:27.075 [2024-10-16 07:03:26.311780] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:20:27.075 nvme0n1
00:20:27.075 07:03:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:20:27.075 Running I/O for 1 seconds...
00:20:28.016 5447.00 IOPS, 21.28 MiB/s
00:20:28.016 Latency(us)
00:20:28.016 [2024-10-16T05:03:27.515Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:28.016 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:20:28.016 Verification LBA range: start 0x0 length 0x2000
00:20:28.016 nvme0n1 : 1.02 5481.24 21.41 0.00 0.00 23170.19 4232.53 29928.11
00:20:28.016 [2024-10-16T05:03:27.515Z] ===================================================================================================================
00:20:28.016 [2024-10-16T05:03:27.515Z] Total : 5481.24 21.41 0.00 0.00 23170.19 4232.53 29928.11
00:20:28.277 {
00:20:28.277 "results": [
00:20:28.277 {
00:20:28.277 "job": "nvme0n1",
00:20:28.277 "core_mask": "0x2",
00:20:28.277 "workload": "verify",
00:20:28.277 "status": "finished",
00:20:28.277 "verify_range": {
00:20:28.277 "start": 0,
00:20:28.277 "length": 8192
00:20:28.277 },
00:20:28.277 "queue_depth": 128,
00:20:28.277 "io_size": 4096,
00:20:28.277 "runtime": 1.017288,
00:20:28.277 "iops": 5481.2403173929115,
00:20:28.277 "mibps": 21.41109498981606,
00:20:28.277 "io_failed": 0,
00:20:28.277 "io_timeout": 0,
00:20:28.277 "avg_latency_us": 23170.19393591583,
00:20:28.277 "min_latency_us": 4232.533333333334,
00:20:28.277 "max_latency_us": 29928.106666666667
00:20:28.277 }
00:20:28.277 ],
00:20:28.277 "core_count": 1
00:20:28.277 }
00:20:28.277 07:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config
00:20:28.277 07:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:28.277 07:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:20:28.277 07:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:28.277 07:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{
00:20:28.277 "subsystems": [
00:20:28.277 {
00:20:28.277 "subsystem": "keyring",
00:20:28.277 "config": [
00:20:28.277 {
00:20:28.277 "method": "keyring_file_add_key",
00:20:28.277 "params": {
00:20:28.277 "name": "key0",
00:20:28.277 "path": "/tmp/tmp.ao4i7Bfgcc"
00:20:28.277 }
00:20:28.277 }
00:20:28.277 ]
00:20:28.277 },
00:20:28.277 {
00:20:28.277 "subsystem": "iobuf",
00:20:28.277 "config": [
00:20:28.277 {
00:20:28.277 "method": "iobuf_set_options",
00:20:28.277 "params": {
00:20:28.277 "small_pool_count": 8192,
00:20:28.277 "large_pool_count": 1024,
00:20:28.277 "small_bufsize": 8192,
00:20:28.277 "large_bufsize": 135168
00:20:28.277 }
00:20:28.277 }
00:20:28.277 ]
00:20:28.277 },
00:20:28.277 {
00:20:28.277 "subsystem": "sock",
00:20:28.277 "config": [
00:20:28.277 {
00:20:28.277 "method": "sock_set_default_impl",
00:20:28.277 "params": {
00:20:28.277 "impl_name": "posix"
00:20:28.277 }
00:20:28.277 },
00:20:28.277 {
00:20:28.277 "method": "sock_impl_set_options",
00:20:28.277 "params": {
00:20:28.277 "impl_name": "ssl",
00:20:28.277 "recv_buf_size": 4096,
00:20:28.277 "send_buf_size": 4096,
00:20:28.277 "enable_recv_pipe": true,
00:20:28.277 "enable_quickack": false,
00:20:28.277 "enable_placement_id": 0,
00:20:28.277 "enable_zerocopy_send_server": true,
00:20:28.277 "enable_zerocopy_send_client": false,
00:20:28.277 "zerocopy_threshold": 0,
00:20:28.277 "tls_version": 0,
00:20:28.277 "enable_ktls": false
00:20:28.277 }
00:20:28.277 },
00:20:28.277 {
00:20:28.277 "method": "sock_impl_set_options",
00:20:28.277 "params": {
00:20:28.277 "impl_name": "posix",
00:20:28.277 "recv_buf_size": 2097152,
00:20:28.277 "send_buf_size": 2097152,
00:20:28.277 "enable_recv_pipe": true,
00:20:28.277 "enable_quickack": false,
00:20:28.277 "enable_placement_id": 0,
00:20:28.277 "enable_zerocopy_send_server": true,
00:20:28.277 "enable_zerocopy_send_client": false,
00:20:28.277 "zerocopy_threshold": 0,
00:20:28.277 "tls_version": 0,
00:20:28.277 "enable_ktls": false
00:20:28.277 }
00:20:28.277 }
00:20:28.277 ]
00:20:28.277 },
00:20:28.277 {
00:20:28.277 "subsystem": "vmd",
00:20:28.277 "config": []
00:20:28.277 },
00:20:28.277 {
00:20:28.277 "subsystem": "accel",
00:20:28.277 "config": [
00:20:28.277 {
00:20:28.277 "method": "accel_set_options",
00:20:28.277 "params": {
00:20:28.277 "small_cache_size": 128,
00:20:28.277 "large_cache_size": 16,
00:20:28.277 "task_count": 2048,
00:20:28.277 "sequence_count": 2048,
00:20:28.277 "buf_count": 2048
00:20:28.277 }
00:20:28.277 }
00:20:28.277 ]
00:20:28.277 },
00:20:28.277 {
00:20:28.277 "subsystem": "bdev",
00:20:28.277 "config": [
00:20:28.277 {
00:20:28.277 "method": "bdev_set_options",
00:20:28.277 "params": {
00:20:28.277 "bdev_io_pool_size": 65535,
00:20:28.277 "bdev_io_cache_size": 256,
00:20:28.277 "bdev_auto_examine": true,
00:20:28.277 "iobuf_small_cache_size": 128,
00:20:28.277 "iobuf_large_cache_size": 16
00:20:28.277 }
00:20:28.277 },
00:20:28.277 {
00:20:28.277 "method": "bdev_raid_set_options",
00:20:28.277 "params": {
00:20:28.277 "process_window_size_kb": 1024,
00:20:28.277 "process_max_bandwidth_mb_sec": 0
00:20:28.277 }
00:20:28.277 },
00:20:28.277 {
00:20:28.277 "method": "bdev_iscsi_set_options",
00:20:28.277 "params": {
00:20:28.277 "timeout_sec": 30
00:20:28.277 }
00:20:28.277 },
00:20:28.277 {
00:20:28.277 "method": "bdev_nvme_set_options",
00:20:28.277 "params": {
00:20:28.277 "action_on_timeout": "none",
00:20:28.277 "timeout_us": 0,
00:20:28.277 "timeout_admin_us": 0,
00:20:28.277 "keep_alive_timeout_ms": 10000,
00:20:28.277 "arbitration_burst": 0,
00:20:28.277 "low_priority_weight": 0,
00:20:28.277 "medium_priority_weight": 0,
00:20:28.277 "high_priority_weight": 0,
00:20:28.277 "nvme_adminq_poll_period_us": 10000,
00:20:28.277 "nvme_ioq_poll_period_us": 0,
00:20:28.277 "io_queue_requests": 0,
00:20:28.277 "delay_cmd_submit": true,
00:20:28.277 "transport_retry_count": 4,
00:20:28.277 "bdev_retry_count": 3,
00:20:28.277 "transport_ack_timeout": 0,
00:20:28.277 "ctrlr_loss_timeout_sec": 0,
00:20:28.277 "reconnect_delay_sec": 0,
00:20:28.277 "fast_io_fail_timeout_sec": 0,
00:20:28.277 "disable_auto_failback": false,
00:20:28.277 "generate_uuids": false,
00:20:28.277 "transport_tos": 0,
00:20:28.277 "nvme_error_stat": false,
00:20:28.277 "rdma_srq_size": 0,
00:20:28.277 "io_path_stat": false,
00:20:28.277 "allow_accel_sequence": false,
00:20:28.277 "rdma_max_cq_size": 0,
00:20:28.277 "rdma_cm_event_timeout_ms": 0,
00:20:28.277 "dhchap_digests": [
00:20:28.277 "sha256",
00:20:28.277 "sha384",
00:20:28.277 "sha512"
00:20:28.277 ],
00:20:28.277 "dhchap_dhgroups": [
00:20:28.277 "null",
00:20:28.277 "ffdhe2048",
00:20:28.277 "ffdhe3072",
00:20:28.277 "ffdhe4096",
00:20:28.277 "ffdhe6144",
00:20:28.277 "ffdhe8192"
00:20:28.277 ]
00:20:28.277 }
00:20:28.277 },
00:20:28.277 {
00:20:28.277 "method": "bdev_nvme_set_hotplug",
00:20:28.277 "params": {
00:20:28.277 "period_us": 100000,
00:20:28.277 "enable": false
00:20:28.277 }
00:20:28.277 },
00:20:28.277 {
00:20:28.277 "method": "bdev_malloc_create",
00:20:28.277 "params": {
00:20:28.277 "name": "malloc0",
00:20:28.277 "num_blocks": 8192,
00:20:28.277 "block_size": 4096,
00:20:28.277 "physical_block_size": 4096,
00:20:28.277 "uuid": "932abb4f-3d2d-4593-ad47-44bbac1d0f84",
00:20:28.277 "optimal_io_boundary": 0,
00:20:28.277 "md_size": 0,
00:20:28.277 "dif_type": 0,
00:20:28.277 "dif_is_head_of_md": false,
00:20:28.277 "dif_pi_format": 0
00:20:28.277 }
00:20:28.277 },
00:20:28.278 {
00:20:28.278 "method": "bdev_wait_for_examine"
00:20:28.278 }
00:20:28.278 ]
00:20:28.278 },
00:20:28.278 {
00:20:28.278 "subsystem": "nbd",
00:20:28.278 "config": []
00:20:28.278 },
00:20:28.278 {
00:20:28.278 "subsystem": "scheduler",
00:20:28.278 "config": [
00:20:28.278 {
00:20:28.278 "method": "framework_set_scheduler",
00:20:28.278 "params": {
00:20:28.278 "name": "static"
00:20:28.278 }
00:20:28.278 }
00:20:28.278 ]
00:20:28.278 },
00:20:28.278 {
00:20:28.278 "subsystem": "nvmf",
00:20:28.278 "config": [
00:20:28.278 {
00:20:28.278 "method": "nvmf_set_config",
00:20:28.278 "params": {
00:20:28.278 "discovery_filter": "match_any",
00:20:28.278 "admin_cmd_passthru": {
00:20:28.278 "identify_ctrlr": false
00:20:28.278 },
00:20:28.278 "dhchap_digests": [
00:20:28.278 "sha256",
00:20:28.278 "sha384",
00:20:28.278 "sha512"
00:20:28.278 ],
00:20:28.278 "dhchap_dhgroups": [
00:20:28.278 "null",
00:20:28.278 "ffdhe2048",
00:20:28.278 "ffdhe3072",
00:20:28.278 "ffdhe4096",
00:20:28.278 "ffdhe6144",
00:20:28.278 "ffdhe8192"
00:20:28.278 ]
00:20:28.278 }
00:20:28.278 },
00:20:28.278 {
00:20:28.278 "method": "nvmf_set_max_subsystems",
00:20:28.278 "params": {
00:20:28.278 "max_subsystems": 1024
00:20:28.278 }
00:20:28.278 },
00:20:28.278 {
00:20:28.278 "method": "nvmf_set_crdt",
00:20:28.278 "params": {
00:20:28.278 "crdt1": 0,
00:20:28.278 "crdt2": 0,
00:20:28.278 "crdt3": 0
00:20:28.278 }
00:20:28.278 },
00:20:28.278 {
00:20:28.278 "method": "nvmf_create_transport",
00:20:28.278 "params": {
00:20:28.278 "trtype": "TCP",
00:20:28.278 "max_queue_depth": 128,
00:20:28.278 "max_io_qpairs_per_ctrlr": 127,
00:20:28.278 "in_capsule_data_size": 4096,
00:20:28.278 "max_io_size": 131072,
00:20:28.278 "io_unit_size": 131072,
00:20:28.278 "max_aq_depth": 128,
00:20:28.278 "num_shared_buffers": 511,
00:20:28.278 "buf_cache_size": 4294967295,
00:20:28.278 "dif_insert_or_strip": false,
00:20:28.278 "zcopy": false,
00:20:28.278 "c2h_success": false,
00:20:28.278 "sock_priority": 0,
00:20:28.278 "abort_timeout_sec": 1,
00:20:28.278 "ack_timeout": 0,
00:20:28.278 "data_wr_pool_size": 0
00:20:28.278 }
00:20:28.278 },
00:20:28.278 {
00:20:28.278 "method": "nvmf_create_subsystem",
00:20:28.278 "params": {
00:20:28.278 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:20:28.278 "allow_any_host": false,
00:20:28.278 "serial_number": "00000000000000000000",
00:20:28.278 "model_number": "SPDK bdev Controller",
00:20:28.278 "max_namespaces": 32,
00:20:28.278 "min_cntlid": 1,
00:20:28.278 "max_cntlid": 65519,
00:20:28.278 "ana_reporting": false
00:20:28.278 }
00:20:28.278 },
00:20:28.278 {
00:20:28.278 "method": "nvmf_subsystem_add_host",
00:20:28.278 "params": {
00:20:28.278 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:20:28.278 "host": "nqn.2016-06.io.spdk:host1",
00:20:28.278 "psk": "key0"
00:20:28.278 }
00:20:28.278 },
00:20:28.278 {
00:20:28.278 "method": "nvmf_subsystem_add_ns",
00:20:28.278 "params": {
00:20:28.278 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:20:28.278 "namespace": {
00:20:28.278 "nsid": 1,
00:20:28.278 "bdev_name": "malloc0",
00:20:28.278 "nguid": "932ABB4F3D2D4593AD4744BBAC1D0F84",
00:20:28.278 "uuid": "932abb4f-3d2d-4593-ad47-44bbac1d0f84",
00:20:28.278 "no_auto_visible": false
00:20:28.278 }
00:20:28.278 }
00:20:28.278 },
00:20:28.278 {
00:20:28.278 "method": "nvmf_subsystem_add_listener",
00:20:28.278 "params": {
00:20:28.278 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:20:28.278 "listen_address": {
00:20:28.278 "trtype": "TCP",
00:20:28.278 "adrfam": "IPv4",
00:20:28.278 "traddr": "10.0.0.2",
00:20:28.278 "trsvcid": "4420"
00:20:28.278 },
00:20:28.278 "secure_channel": false,
00:20:28.278 "sock_impl": "ssl"
00:20:28.278 }
00:20:28.278 }
00:20:28.278 ]
00:20:28.278 }
00:20:28.278 ]
00:20:28.278 }'
00:20:28.278 07:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config
00:20:28.540 07:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{
00:20:28.540 "subsystems": [
00:20:28.540 {
00:20:28.540 "subsystem": "keyring",
00:20:28.540 "config": [
00:20:28.540 {
00:20:28.540 "method": "keyring_file_add_key",
00:20:28.540 "params": {
00:20:28.540 "name": "key0",
00:20:28.540 "path": "/tmp/tmp.ao4i7Bfgcc"
00:20:28.540 }
00:20:28.540 }
00:20:28.540 ]
00:20:28.540 },
00:20:28.540 {
00:20:28.540 "subsystem": "iobuf",
00:20:28.540 "config": [
00:20:28.540 {
00:20:28.540 "method": "iobuf_set_options",
00:20:28.540 "params": {
00:20:28.540 "small_pool_count": 8192,
00:20:28.540 "large_pool_count": 1024,
00:20:28.540 "small_bufsize": 8192,
00:20:28.540 "large_bufsize": 135168
00:20:28.540 }
00:20:28.540 }
00:20:28.540 ]
00:20:28.540 },
00:20:28.540 {
00:20:28.540 "subsystem": "sock",
00:20:28.540 "config": [
00:20:28.540 {
00:20:28.540 "method": "sock_set_default_impl",
00:20:28.540 "params": {
00:20:28.540 "impl_name": "posix"
00:20:28.540 }
00:20:28.540 },
00:20:28.540 {
00:20:28.540 "method": "sock_impl_set_options",
00:20:28.540 "params": {
00:20:28.540 "impl_name": "ssl",
00:20:28.540 "recv_buf_size": 4096,
00:20:28.540 "send_buf_size": 4096,
00:20:28.540 "enable_recv_pipe": true,
00:20:28.540 "enable_quickack": false,
00:20:28.540 "enable_placement_id": 0,
00:20:28.540 "enable_zerocopy_send_server": true,
00:20:28.540 "enable_zerocopy_send_client": false,
00:20:28.540 "zerocopy_threshold": 0,
00:20:28.540 "tls_version": 0,
00:20:28.540 "enable_ktls": false
00:20:28.540 }
00:20:28.540 },
00:20:28.540 {
00:20:28.540 "method": "sock_impl_set_options",
00:20:28.540 "params": {
00:20:28.540 "impl_name": "posix",
00:20:28.540 "recv_buf_size": 2097152,
00:20:28.540 "send_buf_size": 2097152,
00:20:28.540 "enable_recv_pipe": true,
00:20:28.540 "enable_quickack": false,
00:20:28.540 "enable_placement_id": 0,
00:20:28.540 "enable_zerocopy_send_server": true,
00:20:28.540 "enable_zerocopy_send_client": false,
00:20:28.540 "zerocopy_threshold": 0,
00:20:28.540 "tls_version": 0,
00:20:28.540 "enable_ktls": false
00:20:28.540 }
00:20:28.540 }
00:20:28.540 ]
00:20:28.540 },
00:20:28.540 {
00:20:28.540 "subsystem": "vmd",
00:20:28.540 "config": []
00:20:28.540 },
00:20:28.540 {
00:20:28.540 "subsystem": "accel",
00:20:28.540 "config": [
00:20:28.540 {
00:20:28.540 "method": "accel_set_options",
00:20:28.540 "params": {
00:20:28.540 "small_cache_size": 128,
00:20:28.540 "large_cache_size": 16,
00:20:28.540 "task_count": 2048,
00:20:28.540 "sequence_count": 2048,
00:20:28.540 "buf_count": 2048
00:20:28.540 }
00:20:28.540 }
00:20:28.540 ]
00:20:28.540 },
00:20:28.540 {
00:20:28.540 "subsystem": "bdev",
00:20:28.540 "config": [
00:20:28.540 {
00:20:28.540 "method": "bdev_set_options",
00:20:28.540 "params": {
00:20:28.540 "bdev_io_pool_size": 65535,
00:20:28.540 "bdev_io_cache_size": 256,
00:20:28.540 "bdev_auto_examine": true,
00:20:28.540 "iobuf_small_cache_size": 128,
00:20:28.540 "iobuf_large_cache_size": 16
00:20:28.540 }
00:20:28.540 },
00:20:28.540 {
00:20:28.540 "method": "bdev_raid_set_options",
00:20:28.540 "params": {
00:20:28.540 "process_window_size_kb": 1024,
00:20:28.540 "process_max_bandwidth_mb_sec": 0
00:20:28.540 }
00:20:28.540 },
00:20:28.540 {
00:20:28.540 "method": "bdev_iscsi_set_options",
00:20:28.540 "params": {
00:20:28.540 "timeout_sec": 30
00:20:28.540 }
00:20:28.540 },
00:20:28.540 {
00:20:28.540 "method": "bdev_nvme_set_options",
00:20:28.540 "params": {
00:20:28.540 "action_on_timeout": "none",
00:20:28.540 "timeout_us": 0,
00:20:28.540 "timeout_admin_us": 0,
00:20:28.540 "keep_alive_timeout_ms": 10000,
00:20:28.540 "arbitration_burst": 0,
00:20:28.540 "low_priority_weight": 0,
00:20:28.540 "medium_priority_weight": 0,
00:20:28.540 "high_priority_weight": 0,
00:20:28.540 "nvme_adminq_poll_period_us": 10000,
00:20:28.540 "nvme_ioq_poll_period_us": 0,
00:20:28.540 "io_queue_requests": 512,
00:20:28.540 "delay_cmd_submit": true,
00:20:28.540 "transport_retry_count": 4,
00:20:28.540 "bdev_retry_count": 3,
00:20:28.540 "transport_ack_timeout": 0,
00:20:28.540 "ctrlr_loss_timeout_sec": 0,
00:20:28.540 "reconnect_delay_sec": 0,
00:20:28.540 "fast_io_fail_timeout_sec": 0,
00:20:28.540 "disable_auto_failback": false,
00:20:28.540 "generate_uuids": false,
00:20:28.540 "transport_tos": 0,
00:20:28.540 "nvme_error_stat": false,
00:20:28.540 "rdma_srq_size": 0,
00:20:28.540 "io_path_stat": false,
00:20:28.540 "allow_accel_sequence": false,
00:20:28.540 "rdma_max_cq_size": 0,
00:20:28.540 "rdma_cm_event_timeout_ms": 0,
00:20:28.540 "dhchap_digests": [
00:20:28.540 "sha256",
00:20:28.540 "sha384",
00:20:28.540 "sha512"
00:20:28.540 ],
00:20:28.540 "dhchap_dhgroups": [
00:20:28.540 "null", 00:20:28.540 "ffdhe2048", 00:20:28.540 "ffdhe3072", 00:20:28.540 "ffdhe4096", 00:20:28.540 "ffdhe6144", 00:20:28.540 "ffdhe8192" 00:20:28.540 ] 00:20:28.540 } 00:20:28.540 }, 00:20:28.540 { 00:20:28.540 "method": "bdev_nvme_attach_controller", 00:20:28.540 "params": { 00:20:28.540 "name": "nvme0", 00:20:28.540 "trtype": "TCP", 00:20:28.540 "adrfam": "IPv4", 00:20:28.540 "traddr": "10.0.0.2", 00:20:28.540 "trsvcid": "4420", 00:20:28.540 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:28.540 "prchk_reftag": false, 00:20:28.540 "prchk_guard": false, 00:20:28.540 "ctrlr_loss_timeout_sec": 0, 00:20:28.540 "reconnect_delay_sec": 0, 00:20:28.540 "fast_io_fail_timeout_sec": 0, 00:20:28.540 "psk": "key0", 00:20:28.540 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:28.540 "hdgst": false, 00:20:28.540 "ddgst": false, 00:20:28.540 "multipath": "multipath" 00:20:28.540 } 00:20:28.540 }, 00:20:28.540 { 00:20:28.540 "method": "bdev_nvme_set_hotplug", 00:20:28.540 "params": { 00:20:28.540 "period_us": 100000, 00:20:28.540 "enable": false 00:20:28.540 } 00:20:28.540 }, 00:20:28.540 { 00:20:28.540 "method": "bdev_enable_histogram", 00:20:28.540 "params": { 00:20:28.540 "name": "nvme0n1", 00:20:28.540 "enable": true 00:20:28.540 } 00:20:28.540 }, 00:20:28.540 { 00:20:28.540 "method": "bdev_wait_for_examine" 00:20:28.540 } 00:20:28.540 ] 00:20:28.540 }, 00:20:28.540 { 00:20:28.540 "subsystem": "nbd", 00:20:28.540 "config": [] 00:20:28.540 } 00:20:28.540 ] 00:20:28.540 }' 00:20:28.540 07:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 3153025 00:20:28.540 07:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3153025 ']' 00:20:28.540 07:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3153025 00:20:28.540 07:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:28.540 07:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:28.541 07:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3153025 00:20:28.541 07:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:28.541 07:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:28.541 07:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3153025' 00:20:28.541 killing process with pid 3153025 00:20:28.541 07:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3153025 00:20:28.541 Received shutdown signal, test time was about 1.000000 seconds 00:20:28.541 00:20:28.541 Latency(us) 00:20:28.541 [2024-10-16T05:03:28.040Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:28.541 [2024-10-16T05:03:28.040Z] =================================================================================================================== 00:20:28.541 [2024-10-16T05:03:28.040Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:28.541 07:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3153025 00:20:28.801 07:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 3152681 00:20:28.801 07:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3152681 ']' 00:20:28.801 07:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@954 -- # kill -0 3152681 00:20:28.801 07:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:28.801 07:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:28.801 07:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3152681 00:20:28.801 07:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:28.801 07:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:28.801 07:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3152681' 00:20:28.801 killing process with pid 3152681 00:20:28.801 07:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3152681 00:20:28.801 07:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3152681 00:20:28.801 07:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:20:28.801 07:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:28.801 07:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:28.801 07:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:20:28.801 "subsystems": [ 00:20:28.801 { 00:20:28.801 "subsystem": "keyring", 00:20:28.801 "config": [ 00:20:28.801 { 00:20:28.801 "method": "keyring_file_add_key", 00:20:28.801 "params": { 00:20:28.801 "name": "key0", 00:20:28.801 "path": "/tmp/tmp.ao4i7Bfgcc" 00:20:28.801 } 00:20:28.802 } 00:20:28.802 ] 00:20:28.802 }, 00:20:28.802 { 00:20:28.802 "subsystem": "iobuf", 00:20:28.802 "config": [ 00:20:28.802 { 00:20:28.802 "method": "iobuf_set_options", 00:20:28.802 "params": { 00:20:28.802 "small_pool_count": 8192, 00:20:28.802 "large_pool_count": 1024, 00:20:28.802 "small_bufsize": 8192, 00:20:28.802 "large_bufsize": 135168 00:20:28.802 } 00:20:28.802 } 00:20:28.802 ] 00:20:28.802 }, 00:20:28.802 { 00:20:28.802 "subsystem": "sock", 00:20:28.802 "config": [ 00:20:28.802 { 00:20:28.802 "method": "sock_set_default_impl", 00:20:28.802 "params": { 00:20:28.802 "impl_name": "posix" 00:20:28.802 } 00:20:28.802 }, 00:20:28.802 { 00:20:28.802 "method": "sock_impl_set_options", 00:20:28.802 "params": { 00:20:28.802 "impl_name": "ssl", 00:20:28.802 "recv_buf_size": 4096, 00:20:28.802 "send_buf_size": 4096, 00:20:28.802 "enable_recv_pipe": true, 00:20:28.802 "enable_quickack": false, 00:20:28.802 "enable_placement_id": 0, 00:20:28.802 "enable_zerocopy_send_server": true, 00:20:28.802 "enable_zerocopy_send_client": false, 00:20:28.802 "zerocopy_threshold": 0, 00:20:28.802 "tls_version": 0, 00:20:28.802 "enable_ktls": false 00:20:28.802 } 00:20:28.802 }, 00:20:28.802 { 00:20:28.802 "method": "sock_impl_set_options", 00:20:28.802 "params": { 00:20:28.802 "impl_name": "posix", 00:20:28.802 "recv_buf_size": 2097152, 00:20:28.802 "send_buf_size": 2097152, 00:20:28.802 "enable_recv_pipe": true, 00:20:28.802 "enable_quickack": false, 00:20:28.802 "enable_placement_id": 0, 00:20:28.802 "enable_zerocopy_send_server": true, 00:20:28.802 "enable_zerocopy_send_client": false, 00:20:28.802 "zerocopy_threshold": 0, 00:20:28.802 "tls_version": 0, 00:20:28.802 "enable_ktls": false 00:20:28.802 } 00:20:28.802 } 00:20:28.802 ] 00:20:28.802 }, 00:20:28.802 { 00:20:28.802 
"subsystem": "vmd", 00:20:28.802 "config": [] 00:20:28.802 }, 00:20:28.802 { 00:20:28.802 "subsystem": "accel", 00:20:28.802 "config": [ 00:20:28.802 { 00:20:28.802 "method": "accel_set_options", 00:20:28.802 "params": { 00:20:28.802 "small_cache_size": 128, 00:20:28.802 "large_cache_size": 16, 00:20:28.802 "task_count": 2048, 00:20:28.802 "sequence_count": 2048, 00:20:28.802 "buf_count": 2048 00:20:28.802 } 00:20:28.802 } 00:20:28.802 ] 00:20:28.802 }, 00:20:28.802 { 00:20:28.802 "subsystem": "bdev", 00:20:28.802 "config": [ 00:20:28.802 { 00:20:28.802 "method": "bdev_set_options", 00:20:28.802 "params": { 00:20:28.802 "bdev_io_pool_size": 65535, 00:20:28.802 "bdev_io_cache_size": 256, 00:20:28.802 "bdev_auto_examine": true, 00:20:28.802 "iobuf_small_cache_size": 128, 00:20:28.802 "iobuf_large_cache_size": 16 00:20:28.802 } 00:20:28.802 }, 00:20:28.802 { 00:20:28.802 "method": "bdev_raid_set_options", 00:20:28.802 "params": { 00:20:28.802 "process_window_size_kb": 1024, 00:20:28.802 "process_max_bandwidth_mb_sec": 0 00:20:28.802 } 00:20:28.802 }, 00:20:28.802 { 00:20:28.802 "method": "bdev_iscsi_set_options", 00:20:28.802 "params": { 00:20:28.802 "timeout_sec": 30 00:20:28.802 } 00:20:28.802 }, 00:20:28.802 { 00:20:28.802 "method": "bdev_nvme_set_options", 00:20:28.802 "params": { 00:20:28.802 "action_on_timeout": "none", 00:20:28.802 "timeout_us": 0, 00:20:28.802 "timeout_admin_us": 0, 00:20:28.802 "keep_alive_timeout_ms": 10000, 00:20:28.802 "arbitration_burst": 0, 00:20:28.802 "low_priority_weight": 0, 00:20:28.802 "medium_priority_weight": 0, 00:20:28.802 "high_priority_weight": 0, 00:20:28.802 "nvme_adminq_poll_period_us": 10000, 00:20:28.802 "nvme_ioq_poll_period_us": 0, 00:20:28.802 "io_queue_requests": 0, 00:20:28.802 "delay_cmd_submit": true, 00:20:28.802 "transport_retry_count": 4, 00:20:28.802 "bdev_retry_count": 3, 00:20:28.802 "transport_ack_timeout": 0, 00:20:28.802 "ctrlr_loss_timeout_sec": 0, 00:20:28.802 "reconnect_delay_sec": 0, 00:20:28.802 "fast_io_fail_timeout_sec": 0, 00:20:28.802 "disable_auto_failback": false, 00:20:28.802 "generate_uuids": false, 00:20:28.802 "transport_tos": 0, 00:20:28.802 "nvme_error_stat": false, 00:20:28.802 "rdma_srq_size": 0, 00:20:28.802 "io_path_stat": false, 00:20:28.802 "allow_accel_sequence": false, 00:20:28.802 "rdma_max_cq_size": 0, 00:20:28.802 "rdma_cm_event_timeout_ms": 0, 00:20:28.802 "dhchap_digests": [ 00:20:28.802 "sha256", 00:20:28.802 "sha384", 00:20:28.802 "sha512" 00:20:28.802 ], 00:20:28.802 "dhchap_dhgroups": [ 00:20:28.802 "null", 00:20:28.802 "ffdhe2048", 00:20:28.802 "ffdhe3072", 00:20:28.802 "ffdhe4096", 00:20:28.802 "ffdhe6144", 00:20:28.802 "ffdhe8192" 00:20:28.802 ] 00:20:28.802 } 00:20:28.802 }, 00:20:28.802 { 00:20:28.802 "method": "bdev_nvme_set_hotplug", 00:20:28.802 "params": { 00:20:28.802 "period_us": 100000, 00:20:28.802 "enable": false 00:20:28.802 } 00:20:28.802 }, 00:20:28.802 { 00:20:28.802 "method": "bdev_malloc_create", 00:20:28.802 "params": { 00:20:28.802 "name": "malloc0", 00:20:28.802 "num_blocks": 8192, 00:20:28.802 "block_size": 4096, 00:20:28.802 "physical_block_size": 4096, 00:20:28.802 "uuid": "932abb4f-3d2d-4593-ad47-44bbac1d0f84", 00:20:28.802 "optimal_io_boundary": 0, 00:20:28.802 "md_size": 0, 00:20:28.802 "dif_type": 0, 00:20:28.802 "dif_is_head_of_md": false, 00:20:28.802 "dif_pi_format": 0 00:20:28.802 } 00:20:28.802 }, 00:20:28.802 { 00:20:28.802 "method": "bdev_wait_for_examine" 00:20:28.802 } 00:20:28.802 ] 00:20:28.802 }, 00:20:28.802 { 00:20:28.802 "subsystem": "nbd", 
00:20:28.802 "config": [] 00:20:28.802 }, 00:20:28.802 { 00:20:28.802 "subsystem": "scheduler", 00:20:28.802 "config": [ 00:20:28.802 { 00:20:28.802 "method": "framework_set_scheduler", 00:20:28.802 "params": { 00:20:28.802 "name": "static" 00:20:28.802 } 00:20:28.802 } 00:20:28.802 ] 00:20:28.802 }, 00:20:28.802 { 00:20:28.802 "subsystem": "nvmf", 00:20:28.802 "config": [ 00:20:28.802 { 00:20:28.802 "method": "nvmf_set_config", 00:20:28.802 "params": { 00:20:28.802 "discovery_filter": "match_any", 00:20:28.802 "admin_cmd_passthru": { 00:20:28.802 "identify_ctrlr": false 00:20:28.802 }, 00:20:28.802 "dhchap_digests": [ 00:20:28.802 "sha256", 00:20:28.802 "sha384", 00:20:28.802 "sha512" 00:20:28.802 ], 00:20:28.802 "dhchap_dhgroups": [ 00:20:28.802 "null", 00:20:28.802 "ffdhe2048", 00:20:28.802 "ffdhe3072", 00:20:28.802 "ffdhe4096", 00:20:28.802 "ffdhe6144", 00:20:28.802 "ffdhe8192" 00:20:28.802 ] 00:20:28.802 } 00:20:28.802 }, 00:20:28.802 { 00:20:28.802 "method": "nvmf_set_max_subsystems", 00:20:28.802 "params": { 00:20:28.802 "max_subsystems": 1024 00:20:28.802 } 00:20:28.802 }, 00:20:28.802 { 00:20:28.802 "method": "nvmf_set_crdt", 00:20:28.802 "params": { 00:20:28.802 "crdt1": 0, 00:20:28.802 "crdt2": 0, 00:20:28.802 "crdt3": 0 00:20:28.802 } 00:20:28.802 }, 00:20:28.802 { 00:20:28.802 "method": "nvmf_create_transport", 00:20:28.802 "params": { 00:20:28.802 "trtype": "TCP", 00:20:28.802 "max_queue_depth": 128, 00:20:28.802 "max_io_qpairs_per_ctrlr": 127, 00:20:28.802 "in_capsule_data_size": 4096, 00:20:28.802 "max_io_size": 131072, 00:20:28.802 "io_unit_size": 131072, 00:20:28.802 "max_aq_depth": 128, 00:20:28.802 "num_shared_buffers": 511, 00:20:28.802 "buf_cache_size": 4294967295, 00:20:28.802 "dif_insert_or_strip": false, 00:20:28.802 "zcopy": false, 00:20:28.802 "c2h_success": false, 00:20:28.802 "sock_priority": 0, 00:20:28.802 "abort_timeout_sec": 1, 00:20:28.802 "ack_timeout": 0, 00:20:28.802 "data_wr_pool_size": 0 00:20:28.802 } 00:20:28.802 }, 00:20:28.802 { 00:20:28.802 "method": "nvmf_create_subsystem", 00:20:28.802 "params": { 00:20:28.802 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:28.802 "allow_any_host": false, 00:20:28.802 "serial_number": "00000000000000000000", 00:20:28.802 "model_number": "SPDK bdev Controller", 00:20:28.802 "max_namespaces": 32, 00:20:28.802 "min_cntlid": 1, 00:20:28.802 "max_cntlid": 65519, 00:20:28.802 "ana_reporting": false 00:20:28.802 } 00:20:28.802 }, 00:20:28.802 { 00:20:28.802 "method": "nvmf_subsystem_add_host", 00:20:28.802 "params": { 00:20:28.802 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:28.802 "host": "nqn.2016-06.io.spdk:host1", 00:20:28.802 "psk": "key0" 00:20:28.802 } 00:20:28.802 }, 00:20:28.802 { 00:20:28.802 "method": "nvmf_subsystem_add_ns", 00:20:28.802 "params": { 00:20:28.802 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:28.802 "namespace": { 00:20:28.802 "nsid": 1, 00:20:28.802 "bdev_name": "malloc0", 00:20:28.802 "nguid": "932ABB4F3D2D4593AD4744BBAC1D0F84", 00:20:28.802 "uuid": "932abb4f-3d2d-4593-ad47-44bbac1d0f84", 00:20:28.802 "no_auto_visible": false 00:20:28.802 } 00:20:28.802 } 00:20:28.802 }, 00:20:28.802 { 00:20:28.802 "method": "nvmf_subsystem_add_listener", 00:20:28.802 "params": { 00:20:28.802 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:28.802 "listen_address": { 00:20:28.802 "trtype": "TCP", 00:20:28.802 "adrfam": "IPv4", 00:20:28.802 "traddr": "10.0.0.2", 00:20:28.802 "trsvcid": "4420" 00:20:28.802 }, 00:20:28.802 "secure_channel": false, 00:20:28.802 "sock_impl": "ssl" 00:20:28.802 } 00:20:28.802 } 00:20:28.802 ] 
00:20:28.802 }
00:20:28.802 ]
00:20:28.802 }'
00:20:28.803 07:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:20:28.803 07:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=3153631
00:20:28.803 07:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 3153631
00:20:28.803 07:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62
00:20:28.803 07:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3153631 ']'
00:20:28.803 07:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:28.803 07:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100
00:20:28.803 07:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:28.803 07:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable
00:20:28.803 07:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:20:29.063 [2024-10-16 07:03:28.317610] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization...
00:20:29.063 [2024-10-16 07:03:28.317672] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:20:29.063 [2024-10-16 07:03:28.399482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:29.063 [2024-10-16 07:03:28.430242] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:20:29.063 [2024-10-16 07:03:28.430272] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:20:29.063 [2024-10-16 07:03:28.430278] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:20:29.063 [2024-10-16 07:03:28.430283] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:20:29.063 [2024-10-16 07:03:28.430287] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:20:29.063 [2024-10-16 07:03:28.430780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:29.323 [2024-10-16 07:03:28.623740] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:20:29.323 [2024-10-16 07:03:28.655772] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:20:29.323 [2024-10-16 07:03:28.655976] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:20:29.893 07:03:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:20:29.893 07:03:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0
00:20:29.893 07:03:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:20:29.893 07:03:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable
00:20:29.893 07:03:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:20:29.893 07:03:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:20:29.893 07:03:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=3153742
00:20:29.893 07:03:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 3153742 /var/tmp/bdevperf.sock
00:20:29.893 07:03:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3153742 ']'
00:20:29.893 07:03:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:20:29.893 07:03:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100
00:20:29.893 07:03:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:20:29.893 07:03:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63
00:20:29.893 07:03:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable
00:20:29.893 07:03:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:20:29.893 07:03:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{
00:20:29.893 "subsystems": [
00:20:29.894 {
00:20:29.894 "subsystem": "keyring",
00:20:29.894 "config": [
00:20:29.894 {
00:20:29.894 "method": "keyring_file_add_key",
00:20:29.894 "params": {
00:20:29.894 "name": "key0",
00:20:29.894 "path": "/tmp/tmp.ao4i7Bfgcc"
00:20:29.894 }
00:20:29.894 }
00:20:29.894 ]
00:20:29.894 },
00:20:29.894 {
00:20:29.894 "subsystem": "iobuf",
00:20:29.894 "config": [
00:20:29.894 {
00:20:29.894 "method": "iobuf_set_options",
00:20:29.894 "params": {
00:20:29.894 "small_pool_count": 8192,
00:20:29.894 "large_pool_count": 1024,
00:20:29.894 "small_bufsize": 8192,
00:20:29.894 "large_bufsize": 135168
00:20:29.894 }
00:20:29.894 }
00:20:29.894 ]
00:20:29.894 },
00:20:29.894 {
00:20:29.894 "subsystem": "sock",
00:20:29.894 "config": [
00:20:29.894 {
00:20:29.894 "method": "sock_set_default_impl",
00:20:29.894 "params": {
00:20:29.894 "impl_name": "posix"
00:20:29.894 }
00:20:29.894 },
00:20:29.894 {
00:20:29.894 "method": "sock_impl_set_options",
00:20:29.894 "params": {
00:20:29.894 "impl_name": "ssl",
00:20:29.894 "recv_buf_size": 4096,
00:20:29.894 "send_buf_size": 4096,
00:20:29.894 "enable_recv_pipe": true,
00:20:29.894 "enable_quickack": false,
00:20:29.894 "enable_placement_id": 0,
00:20:29.894 "enable_zerocopy_send_server": true,
00:20:29.894 "enable_zerocopy_send_client": false,
00:20:29.894 "zerocopy_threshold": 0,
00:20:29.894 "tls_version": 0,
00:20:29.894 "enable_ktls": false
00:20:29.894 }
00:20:29.894 },
00:20:29.894 {
00:20:29.894 "method": "sock_impl_set_options",
00:20:29.894 "params": {
00:20:29.894 "impl_name": "posix",
00:20:29.894 "recv_buf_size": 2097152,
00:20:29.894 "send_buf_size": 2097152,
00:20:29.894 "enable_recv_pipe": true,
00:20:29.894 "enable_quickack": false,
00:20:29.894 "enable_placement_id": 0,
00:20:29.894 "enable_zerocopy_send_server": true,
00:20:29.894 "enable_zerocopy_send_client": false,
00:20:29.894 "zerocopy_threshold": 0,
00:20:29.894 "tls_version": 0,
00:20:29.894 "enable_ktls": false
00:20:29.894 }
00:20:29.894 }
00:20:29.894 ]
00:20:29.894 },
00:20:29.894 {
00:20:29.894 "subsystem": "vmd",
00:20:29.894 "config": []
00:20:29.894 },
00:20:29.894 {
00:20:29.894 "subsystem": "accel",
00:20:29.894 "config": [
00:20:29.894 {
00:20:29.894 "method": "accel_set_options",
00:20:29.894 "params": {
00:20:29.894 "small_cache_size": 128,
00:20:29.894 "large_cache_size": 16,
00:20:29.894 "task_count": 2048,
00:20:29.894 "sequence_count": 2048,
00:20:29.894 "buf_count": 2048
00:20:29.894 }
00:20:29.894 }
00:20:29.894 ]
00:20:29.894 },
00:20:29.894 {
00:20:29.894 "subsystem": "bdev",
00:20:29.894 "config": [
00:20:29.894 {
00:20:29.894 "method": "bdev_set_options",
00:20:29.894 "params": {
00:20:29.894 "bdev_io_pool_size": 65535,
00:20:29.894 "bdev_io_cache_size": 256,
00:20:29.894 "bdev_auto_examine": true,
00:20:29.894 "iobuf_small_cache_size": 128,
00:20:29.894 "iobuf_large_cache_size": 16
00:20:29.894 }
00:20:29.894 },
00:20:29.894 {
00:20:29.894 "method": "bdev_raid_set_options",
00:20:29.894 "params": {
00:20:29.894 "process_window_size_kb": 1024,
00:20:29.894 "process_max_bandwidth_mb_sec": 0
00:20:29.894 }
00:20:29.894 },
00:20:29.894 {
00:20:29.894 "method": "bdev_iscsi_set_options",
00:20:29.894 "params": {
00:20:29.894 "timeout_sec": 30
00:20:29.894 }
00:20:29.894 },
00:20:29.894 {
00:20:29.894 "method": "bdev_nvme_set_options",
00:20:29.894 "params": {
00:20:29.894 "action_on_timeout": "none",
00:20:29.894 "timeout_us": 0,
00:20:29.894 "timeout_admin_us": 0,
00:20:29.894 "keep_alive_timeout_ms": 10000,
00:20:29.894 "arbitration_burst": 0,
00:20:29.894 "low_priority_weight": 0,
00:20:29.894 "medium_priority_weight": 0,
00:20:29.894 "high_priority_weight": 0,
00:20:29.894 "nvme_adminq_poll_period_us": 10000,
00:20:29.894 "nvme_ioq_poll_period_us": 0,
00:20:29.894 "io_queue_requests": 512,
00:20:29.894 "delay_cmd_submit": true,
00:20:29.894 "transport_retry_count": 4,
00:20:29.894 "bdev_retry_count": 3,
00:20:29.894 "transport_ack_timeout": 0,
00:20:29.894 "ctrlr_loss_timeout_sec": 0,
00:20:29.894 "reconnect_delay_sec": 0,
00:20:29.894 "fast_io_fail_timeout_sec": 0,
00:20:29.894 "disable_auto_failback": false,
00:20:29.894 "generate_uuids": false,
00:20:29.894 "transport_tos": 0,
00:20:29.894 "nvme_error_stat": false,
00:20:29.894 "rdma_srq_size": 0,
00:20:29.894 "io_path_stat": false,
00:20:29.894 "allow_accel_sequence": false,
00:20:29.894 "rdma_max_cq_size": 0,
00:20:29.894 "rdma_cm_event_timeout_ms": 0,
00:20:29.894 "dhchap_digests": [
00:20:29.894 "sha256",
00:20:29.894 "sha384",
00:20:29.894 "sha512"
00:20:29.894 ],
00:20:29.894 "dhchap_dhgroups": [
00:20:29.894 "null",
00:20:29.894 "ffdhe2048",
00:20:29.894 "ffdhe3072",
00:20:29.894 "ffdhe4096",
00:20:29.894 "ffdhe6144",
00:20:29.894 "ffdhe8192"
00:20:29.894 ]
00:20:29.894 }
00:20:29.894 },
00:20:29.894 {
00:20:29.894 "method": "bdev_nvme_attach_controller",
00:20:29.894 "params": {
00:20:29.894 "name": "nvme0",
00:20:29.894 "trtype": "TCP",
00:20:29.894 "adrfam": "IPv4",
00:20:29.894 "traddr": "10.0.0.2",
00:20:29.894 "trsvcid": "4420",
00:20:29.894 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:20:29.894 "prchk_reftag": false,
00:20:29.894 "prchk_guard": false,
00:20:29.894 "ctrlr_loss_timeout_sec": 0,
00:20:29.894 "reconnect_delay_sec": 0,
00:20:29.894 "fast_io_fail_timeout_sec": 0,
00:20:29.894 "psk": "key0",
00:20:29.894 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:20:29.894 "hdgst": false,
00:20:29.894 "ddgst": false,
00:20:29.894 "multipath": "multipath"
00:20:29.894 }
00:20:29.894 },
00:20:29.894 {
00:20:29.894 "method": "bdev_nvme_set_hotplug",
00:20:29.894 "params": {
00:20:29.894 "period_us": 100000,
00:20:29.894 "enable": false
00:20:29.894 }
00:20:29.894 },
00:20:29.894 {
00:20:29.894 "method": "bdev_enable_histogram",
00:20:29.894 "params": {
00:20:29.894 "name": "nvme0n1",
00:20:29.894 "enable": true
00:20:29.894 }
00:20:29.894 },
00:20:29.894 {
00:20:29.894 "method": "bdev_wait_for_examine"
00:20:29.894 }
00:20:29.894 ]
00:20:29.894 },
00:20:29.894 {
00:20:29.894 "subsystem": "nbd",
00:20:29.894 "config": []
00:20:29.894 }
00:20:29.894 ]
00:20:29.894 }'
[2024-10-16 07:03:29.182401] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization...
00:20:29.894 [2024-10-16 07:03:29.182455] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3153742 ] 00:20:29.894 [2024-10-16 07:03:29.258449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.894 [2024-10-16 07:03:29.288083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:30.154 [2024-10-16 07:03:29.422504] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:30.725 07:03:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:30.725 07:03:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:30.725 07:03:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:30.725 07:03:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:20:30.725 07:03:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.725 07:03:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:30.985 Running I/O for 1 seconds... 00:20:31.926 5401.00 IOPS, 21.10 MiB/s 00:20:31.926 Latency(us) 00:20:31.926 [2024-10-16T05:03:31.425Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:31.926 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:31.926 Verification LBA range: start 0x0 length 0x2000 00:20:31.926 nvme0n1 : 1.01 5464.57 21.35 0.00 0.00 23280.36 4614.83 28180.48 00:20:31.926 [2024-10-16T05:03:31.425Z] =================================================================================================================== 00:20:31.926 [2024-10-16T05:03:31.425Z] Total : 5464.57 21.35 0.00 0.00 23280.36 4614.83 28180.48 00:20:31.926 { 00:20:31.926 "results": [ 00:20:31.926 { 00:20:31.926 "job": "nvme0n1", 00:20:31.926 "core_mask": "0x2", 00:20:31.926 "workload": "verify", 00:20:31.926 "status": "finished", 00:20:31.926 "verify_range": { 00:20:31.926 "start": 0, 00:20:31.926 "length": 8192 00:20:31.926 }, 00:20:31.926 "queue_depth": 128, 00:20:31.926 "io_size": 4096, 00:20:31.926 "runtime": 1.01179, 00:20:31.926 "iops": 5464.572688008381, 00:20:31.926 "mibps": 21.34598706253274, 00:20:31.926 "io_failed": 0, 00:20:31.926 "io_timeout": 0, 00:20:31.926 "avg_latency_us": 23280.35570989329, 00:20:31.926 "min_latency_us": 4614.826666666667, 00:20:31.926 "max_latency_us": 28180.48 00:20:31.926 } 00:20:31.926 ], 00:20:31.926 "core_count": 1 00:20:31.926 } 00:20:31.926 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:20:31.926 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:20:31.926 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:31.926 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:20:31.926 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:20:31.926 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 
00:20:31.926 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:31.926 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:20:31.926 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:20:31.926 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:20:31.926 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:31.926 nvmf_trace.0 00:20:31.926 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:20:31.926 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3153742 00:20:31.926 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3153742 ']' 00:20:31.926 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3153742 00:20:31.926 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:31.926 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:31.926 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3153742 00:20:32.186 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:32.186 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:32.186 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3153742' 00:20:32.186 killing process with pid 3153742 00:20:32.186 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3153742 00:20:32.186 Received shutdown signal, test time was about 1.000000 seconds 00:20:32.186 00:20:32.186 Latency(us) 00:20:32.186 [2024-10-16T05:03:31.685Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:32.186 [2024-10-16T05:03:31.685Z] =================================================================================================================== 00:20:32.186 [2024-10-16T05:03:31.685Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:32.186 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3153742 00:20:32.186 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:32.186 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:32.186 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:20:32.186 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:32.186 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:20:32.186 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:32.186 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:32.186 rmmod nvme_tcp 00:20:32.186 rmmod nvme_fabrics 00:20:32.186 rmmod nvme_keyring 00:20:32.186 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:32.186 07:03:31 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:20:32.186 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:20:32.186 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@515 -- # '[' -n 3153631 ']' 00:20:32.186 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # killprocess 3153631 00:20:32.186 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3153631 ']' 00:20:32.186 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3153631 00:20:32.186 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:32.186 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:32.186 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3153631 00:20:32.446 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:32.446 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:32.447 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3153631' 00:20:32.447 killing process with pid 3153631 00:20:32.447 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3153631 00:20:32.447 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3153631 00:20:32.447 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:32.447 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:32.447 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:32.447 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:20:32.447 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-save 00:20:32.447 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:32.447 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-restore 00:20:32.447 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:32.447 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:32.447 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:32.447 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:32.447 07:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:35.051 07:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:35.051 07:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.44hjdLhSn8 /tmp/tmp.u4xrxuUXMO /tmp/tmp.ao4i7Bfgcc 00:20:35.051 00:20:35.051 real 1m26.705s 00:20:35.051 user 2m16.664s 00:20:35.051 sys 0m26.710s 00:20:35.051 07:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:35.051 07:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:35.051 ************************************ 00:20:35.051 END TEST nvmf_tls 
00:20:35.051 ************************************ 00:20:35.051 07:03:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:35.051 07:03:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:35.051 07:03:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:35.051 07:03:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:35.051 ************************************ 00:20:35.051 START TEST nvmf_fips 00:20:35.051 ************************************ 00:20:35.051 07:03:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:35.051 * Looking for test storage... 00:20:35.051 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:35.051 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:35.051 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:20:35.051 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:35.051 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:35.051 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:35.051 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:35.051 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:35.051 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:35.051 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:35.051 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:35.051 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:35.051 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:20:35.051 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:20:35.051 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:20:35.051 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:35.051 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:35.051 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:35.051 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:35.051 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:35.051 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:35.051 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:35.051 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:35.051 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:35.051 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:35.051 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:20:35.051 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:20:35.051 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:35.051 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:20:35.051 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:20:35.051 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:35.051 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:35.051 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:20:35.051 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:35.051 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:35.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.051 --rc genhtml_branch_coverage=1 00:20:35.051 --rc genhtml_function_coverage=1 00:20:35.051 --rc genhtml_legend=1 00:20:35.051 --rc geninfo_all_blocks=1 00:20:35.051 --rc geninfo_unexecuted_blocks=1 00:20:35.051 00:20:35.051 ' 00:20:35.051 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:35.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.051 --rc genhtml_branch_coverage=1 00:20:35.051 --rc genhtml_function_coverage=1 00:20:35.051 --rc genhtml_legend=1 00:20:35.051 --rc geninfo_all_blocks=1 00:20:35.051 --rc geninfo_unexecuted_blocks=1 00:20:35.051 00:20:35.051 ' 00:20:35.051 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:35.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.051 --rc genhtml_branch_coverage=1 00:20:35.051 --rc genhtml_function_coverage=1 00:20:35.051 --rc genhtml_legend=1 00:20:35.051 --rc geninfo_all_blocks=1 00:20:35.051 --rc geninfo_unexecuted_blocks=1 00:20:35.051 00:20:35.051 ' 00:20:35.051 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:35.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.051 --rc genhtml_branch_coverage=1 00:20:35.051 --rc genhtml_function_coverage=1 00:20:35.051 --rc genhtml_legend=1 00:20:35.051 --rc geninfo_all_blocks=1 00:20:35.051 --rc geninfo_unexecuted_blocks=1 00:20:35.051 00:20:35.051 ' 00:20:35.051 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:35.051 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:35.051 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:20:35.051 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:35.051 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:35.051 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:35.051 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:35.051 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:35.051 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:35.051 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:35.051 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:35.051 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:35.051 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:35.051 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:35.051 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:35.051 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:35.051 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:35.051 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:35.051 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:35.052 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:20:35.052 07:03:34 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:20:35.052 Error setting digest 00:20:35.052 40B2DE788B7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:20:35.052 40B2DE788B7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:20:35.052 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:20:35.053 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:35.053 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:35.053 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:35.053 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:20:35.053 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:20:35.053 
07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:35.053 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:35.053 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:35.053 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:35.053 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:35.053 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:35.053 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:35.053 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:20:35.053 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:20:35.053 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:20:35.053 07:03:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:43.294 07:03:41 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:43.294 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:43.294 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:43.294 07:03:41 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:43.294 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:43.294 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # is_hw=yes 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:43.294 07:03:41 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:43.294 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:43.295 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:43.295 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:43.295 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:43.295 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:43.295 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:43.295 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:43.295 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:43.295 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:43.295 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:43.295 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:43.295 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:43.295 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:43.295 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:43.295 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.635 ms 00:20:43.295 00:20:43.295 --- 10.0.0.2 ping statistics --- 00:20:43.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.295 rtt min/avg/max/mdev = 0.635/0.635/0.635/0.000 ms 00:20:43.295 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:43.295 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:43.295 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:20:43.295 00:20:43.295 --- 10.0.0.1 ping statistics --- 00:20:43.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.295 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:20:43.295 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:43.295 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # return 0 00:20:43.295 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:20:43.295 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:43.295 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:20:43.295 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:20:43.295 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:43.295 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:20:43.295 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:20:43.295 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:20:43.295 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:43.295 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:43.295 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:43.295 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # nvmfpid=3158462 00:20:43.295 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # waitforlisten 3158462 00:20:43.295 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:43.295 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 3158462 ']' 00:20:43.295 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:43.295 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:43.295 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:43.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:43.295 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:43.295 07:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:43.295 [2024-10-16 07:03:42.048623] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
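For reference, the nvmf_tcp_init sequence traced above moves one of the two E810 ports (cvl_0_0) into a network namespace so the initiator (default namespace, cvl_0_1) and the target (namespace side) exchange NVMe/TCP traffic over the physical ports. Condensed to its essential commands (values as in this run; error handling and the iptables rule comment omitted):

    ip netns add cvl_0_0_ns_spdk                          # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator IP, default ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP
    ping -c 1 10.0.0.2                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator

The two successful single-packet pings above confirm the wiring before the nvmf target is started inside the namespace.
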
00:20:43.295 [2024-10-16 07:03:42.048701] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:43.295 [2024-10-16 07:03:42.139439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.295 [2024-10-16 07:03:42.189432] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:43.295 [2024-10-16 07:03:42.189484] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:43.295 [2024-10-16 07:03:42.189494] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:43.295 [2024-10-16 07:03:42.189501] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:43.295 [2024-10-16 07:03:42.189507] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:43.295 [2024-10-16 07:03:42.190324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:43.556 07:03:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:43.556 07:03:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:20:43.556 07:03:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:43.556 07:03:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:43.556 07:03:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:43.556 07:03:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:43.556 07:03:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:20:43.556 07:03:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:43.556 07:03:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:20:43.556 07:03:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.WNw 00:20:43.556 07:03:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:43.556 07:03:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.WNw 00:20:43.556 07:03:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.WNw 00:20:43.556 07:03:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.WNw 00:20:43.556 07:03:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:43.817 [2024-10-16 07:03:43.064309] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:43.817 [2024-10-16 07:03:43.080302] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:43.817 [2024-10-16 07:03:43.080596] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:43.817 malloc0 00:20:43.817 07:03:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:43.817 07:03:43 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=3158807 00:20:43.817 07:03:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 3158807 /var/tmp/bdevperf.sock 00:20:43.817 07:03:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:43.817 07:03:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 3158807 ']' 00:20:43.817 07:03:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:43.817 07:03:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:43.817 07:03:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:43.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:43.817 07:03:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:43.817 07:03:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:43.817 [2024-10-16 07:03:43.225541] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:20:43.817 [2024-10-16 07:03:43.225623] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3158807 ] 00:20:43.817 [2024-10-16 07:03:43.310454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.078 [2024-10-16 07:03:43.361520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:44.649 07:03:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:44.649 07:03:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:20:44.649 07:03:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.WNw 00:20:44.910 07:03:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:44.910 [2024-10-16 07:03:44.406532] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:45.170 TLSTESTn1 00:20:45.170 07:03:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:45.170 Running I/O for 10 seconds... 
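Stripped of xtrace noise, the client-side TLS setup just traced is three calls against the bdevperf RPC socket: register the PSK file as key0, attach a TLS-protected controller with it, and run the workload. A condensed sketch (script paths shortened; values as in this run):

    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.WNw
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The attached controller's namespace is exposed as bdev TLSTESTn1, which is the device name in the 10-second results that follow.
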
00:20:47.493 2738.00 IOPS, 10.70 MiB/s
[2024-10-16T05:03:47.932Z] 3551.50 IOPS, 13.87 MiB/s
[2024-10-16T05:03:48.872Z] 4255.00 IOPS, 16.62 MiB/s
[2024-10-16T05:03:49.814Z] 4744.25 IOPS, 18.53 MiB/s
[2024-10-16T05:03:50.754Z] 4809.60 IOPS, 18.79 MiB/s
[2024-10-16T05:03:51.694Z] 4965.00 IOPS, 19.39 MiB/s
[2024-10-16T05:03:52.636Z] 5023.43 IOPS, 19.62 MiB/s
[2024-10-16T05:03:54.020Z] 5166.00 IOPS, 20.18 MiB/s
[2024-10-16T05:03:54.962Z] 5170.44 IOPS, 20.20 MiB/s
[2024-10-16T05:03:54.962Z] 5256.00 IOPS, 20.53 MiB/s
00:20:55.463 Latency(us)
00:20:55.463 [2024-10-16T05:03:54.962Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:55.463 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:20:55.463 Verification LBA range: start 0x0 length 0x2000
00:20:55.463 TLSTESTn1 : 10.01 5262.87 20.56 0.00 0.00 24288.47 4450.99 86944.43
00:20:55.463 [2024-10-16T05:03:54.962Z] ===================================================================================================================
00:20:55.463 [2024-10-16T05:03:54.962Z] Total : 5262.87 20.56 0.00 0.00 24288.47 4450.99 86944.43
00:20:55.463 {
00:20:55.463 "results": [
00:20:55.463 {
00:20:55.463 "job": "TLSTESTn1",
00:20:55.463 "core_mask": "0x4",
00:20:55.463 "workload": "verify",
00:20:55.463 "status": "finished",
00:20:55.463 "verify_range": {
00:20:55.463 "start": 0,
00:20:55.463 "length": 8192
00:20:55.463 },
00:20:55.463 "queue_depth": 128,
00:20:55.463 "io_size": 4096,
00:20:55.463 "runtime": 10.01108,
00:20:55.463 "iops": 5262.86874143449,
00:20:55.463 "mibps": 20.558081021228478,
00:20:55.463 "io_failed": 0,
00:20:55.463 "io_timeout": 0,
00:20:55.463 "avg_latency_us": 24288.465262398695,
00:20:55.463 "min_latency_us": 4450.986666666667,
00:20:55.463 "max_latency_us": 86944.42666666667
00:20:55.463 }
00:20:55.463 ],
00:20:55.463 "core_count": 1
00:20:55.463 }
00:20:55.463 07:03:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup
00:20:55.463 07:03:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0
00:20:55.463 07:03:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id
00:20:55.463 07:03:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0
00:20:55.463 07:03:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']'
00:20:55.463 07:03:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:20:55.463 07:03:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0
00:20:55.463 07:03:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]]
00:20:55.463 07:03:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files
00:20:55.463 07:03:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:20:55.463 nvmf_trace.0
00:20:55.463 07:03:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0
00:20:55.463 07:03:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3158807
00:20:55.463 07:03:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 3158807 ']'
00:20:55.463 07:03:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 3158807
00:20:55.463 07:03:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname
00:20:55.463 07:03:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:20:55.463 07:03:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3158807
00:20:55.463 07:03:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:20:55.463 07:03:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:20:55.463 07:03:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3158807'
killing process with pid 3158807
00:20:55.463 07:03:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 3158807
00:20:55.463 Received shutdown signal, test time was about 10.000000 seconds
00:20:55.463
00:20:55.463 Latency(us)
00:20:55.463 [2024-10-16T05:03:54.962Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:55.463 [2024-10-16T05:03:54.962Z] ===================================================================================================================
00:20:55.463 [2024-10-16T05:03:54.962Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:55.463 07:03:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 3158807
00:20:55.463 07:03:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini
00:20:55.463 07:03:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # nvmfcleanup
00:20:55.463 07:03:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync
00:20:55.463 07:03:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:20:55.463 07:03:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e
00:20:55.463 07:03:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20}
00:20:55.463 07:03:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:20:55.463 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:20:55.724 07:03:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:20:55.724 07:03:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e
00:20:55.724 07:03:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0
00:20:55.724 07:03:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@515 -- # '[' -n 3158462 ']'
00:20:55.724 07:03:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # killprocess 3158462
00:20:55.724 07:03:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 3158462 ']'
00:20:55.724 07:03:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 3158462
00:20:55.724 07:03:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname
00:20:55.724 07:03:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:20:55.724 07:03:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3158462
00:20:55.724 07:03:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:20:55.724 07:03:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:20:55.724 07:03:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3158462'
killing process with pid 3158462
00:20:55.724 07:03:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 3158462
00:20:55.724 07:03:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 3158462
00:20:55.724 07:03:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:20:55.724 07:03:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:20:55.724 07:03:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:20:55.724 07:03:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr
00:20:55.724 07:03:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-save
00:20:55.724 07:03:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:20:55.724 07:03:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-restore
00:20:55.724 07:03:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:20:55.724 07:03:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns
00:20:55.724 07:03:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:55.724 07:03:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:20:55.724 07:03:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:58.269 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:20:58.269 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.WNw
00:20:58.269
00:20:58.269 real 0m23.278s
00:20:58.269 user 0m24.985s
00:20:58.269 sys 0m9.670s
00:20:58.269 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable
00:20:58.269 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x
00:20:58.269 ************************************
00:20:58.269 END TEST nvmf_fips
00:20:58.269 ************************************
00:20:58.269 07:03:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp
00:20:58.269 07:03:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:20:58.269 07:03:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:20:58.269 07:03:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:20:58.269 ************************************
00:20:58.269 START TEST nvmf_control_msg_list
00:20:58.269 ************************************
00:20:58.269 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp
00:20:58.269 * Looking for test storage...
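(Reader's note: the control_msg_list test starting here exercises exhaustion of the target's control-message pool. Stripped of the rpc_cmd wrappers, the target-side bring-up traced further below is the following RPC sequence — a sketch assuming the default /var/tmp/spdk.sock; the deliberately tiny --control-msg-num 1 pool is the knob under test:)

# TCP transport with small in-capsule data and a single control message buffer
scripts/rpc.py nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1

# Allow-any-host subsystem backed by a small malloc bdev (32 MB, 512-byte blocks)
scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
scripts/rpc.py bdev_malloc_create -b Malloc0 32 512
scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0

# Listener on the namespaced target address; three spdk_nvme_perf initiators then connect from cores 1-3
scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420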
00:20:58.269 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:58.269 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:58.269 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:20:58.269 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:58.269 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:58.269 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:58.269 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:58.269 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:58.269 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:20:58.269 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:20:58.269 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:20:58.269 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:20:58.269 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:58.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.270 --rc genhtml_branch_coverage=1 00:20:58.270 --rc genhtml_function_coverage=1 00:20:58.270 --rc genhtml_legend=1 00:20:58.270 --rc geninfo_all_blocks=1 00:20:58.270 --rc geninfo_unexecuted_blocks=1 00:20:58.270 00:20:58.270 ' 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:58.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.270 --rc genhtml_branch_coverage=1 00:20:58.270 --rc genhtml_function_coverage=1 00:20:58.270 --rc genhtml_legend=1 00:20:58.270 --rc geninfo_all_blocks=1 00:20:58.270 --rc geninfo_unexecuted_blocks=1 00:20:58.270 00:20:58.270 ' 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:58.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.270 --rc genhtml_branch_coverage=1 00:20:58.270 --rc genhtml_function_coverage=1 00:20:58.270 --rc genhtml_legend=1 00:20:58.270 --rc geninfo_all_blocks=1 00:20:58.270 --rc geninfo_unexecuted_blocks=1 00:20:58.270 00:20:58.270 ' 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:58.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.270 --rc genhtml_branch_coverage=1 00:20:58.270 --rc genhtml_function_coverage=1 00:20:58.270 --rc genhtml_legend=1 00:20:58.270 --rc geninfo_all_blocks=1 00:20:58.270 --rc geninfo_unexecuted_blocks=1 00:20:58.270 00:20:58.270 ' 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:58.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:20:58.270 07:03:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:06.412 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:06.412 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:21:06.412 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:06.412 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:06.412 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:06.412 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:06.412 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:06.412 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:21:06.412 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:06.412 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:21:06.412 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:21:06.412 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:21:06.412 07:04:04 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:21:06.412 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:21:06.412 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:21:06.412 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:06.412 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:06.412 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:06.412 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:06.412 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:06.412 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:06.413 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:06.413 07:04:04 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:06.413 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:06.413 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:06.413 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # is_hw=yes 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:06.413 07:04:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:06.413 07:04:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:06.413 07:04:05 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:21:06.413 07:04:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:21:06.413 07:04:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:21:06.413 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:21:06.413 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.576 ms
00:21:06.413
00:21:06.413 --- 10.0.0.2 ping statistics ---
00:21:06.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:06.413 rtt min/avg/max/mdev = 0.576/0.576/0.576/0.000 ms
00:21:06.413 07:04:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:21:06.413 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:21:06.413 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms
00:21:06.413
00:21:06.414 --- 10.0.0.1 ping statistics ---
00:21:06.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:06.414 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms
00:21:06.414 07:04:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:21:06.414 07:04:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@448 -- # return 0
00:21:06.414 07:04:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:21:06.414 07:04:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:21:06.414 07:04:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:21:06.414 07:04:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:21:06.414 07:04:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:21:06.414 07:04:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:21:06.414 07:04:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:21:06.414 07:04:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart
00:21:06.414 07:04:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:21:06.414 07:04:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable
00:21:06.414 07:04:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:21:06.414 07:04:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # nvmfpid=3165168
00:21:06.414 07:04:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # waitforlisten 3165168
00:21:06.414 07:04:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:21:06.414 07:04:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 3165168 ']'
00:21:06.414 07:04:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list --
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:06.414 07:04:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:06.414 07:04:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:06.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:06.414 07:04:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:06.414 07:04:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:06.414 [2024-10-16 07:04:05.182359] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:21:06.414 [2024-10-16 07:04:05.182430] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:06.414 [2024-10-16 07:04:05.270892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:06.414 [2024-10-16 07:04:05.322718] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:06.414 [2024-10-16 07:04:05.322768] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:06.414 [2024-10-16 07:04:05.322777] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:06.414 [2024-10-16 07:04:05.322784] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:06.414 [2024-10-16 07:04:05.322791] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:06.414 [2024-10-16 07:04:05.323551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:06.675 07:04:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:06.675 07:04:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:21:06.675 07:04:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:06.675 07:04:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:06.675 07:04:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:06.675 07:04:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:06.675 07:04:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:06.675 07:04:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:06.675 07:04:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:21:06.675 07:04:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.675 07:04:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:06.675 [2024-10-16 07:04:06.045214] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:06.675 07:04:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.675 07:04:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:21:06.675 07:04:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.675 07:04:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:06.675 07:04:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.675 07:04:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:06.675 07:04:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.675 07:04:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:06.675 Malloc0 00:21:06.675 07:04:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.675 07:04:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:06.675 07:04:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.675 07:04:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:06.675 07:04:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.675 07:04:06 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:21:06.675 07:04:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:06.675 07:04:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:21:06.675 [2024-10-16 07:04:06.099612] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:06.675 07:04:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:06.675 07:04:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=3165503
00:21:06.675 07:04:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:21:06.675 07:04:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=3165504
00:21:06.675 07:04:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:21:06.675 07:04:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=3165505
00:21:06.675 07:04:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 3165503
00:21:06.675 07:04:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:21:06.937 [2024-10-16 07:04:06.180104] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:21:06.937 [2024-10-16 07:04:06.190183] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:21:06.937 [2024-10-16 07:04:06.190433] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:21:08.323 Initializing NVMe Controllers
00:21:08.323 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:21:08.323 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3
00:21:08.323 Initialization complete. Launching workers.
00:21:08.323 ========================================================
00:21:08.323 Latency(us)
00:21:08.323 Device Information : IOPS MiB/s Average min max
00:21:08.323 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 1483.00 5.79 674.31 247.01 909.33
00:21:08.323 ========================================================
00:21:08.323 Total : 1483.00 5.79 674.31 247.01 909.33
00:21:08.323
00:21:08.323 Initializing NVMe Controllers
00:21:08.323 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:21:08.323 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1
00:21:08.323 Initialization complete. Launching workers.
00:21:08.323 ========================================================
00:21:08.323 Latency(us)
00:21:08.323 Device Information : IOPS MiB/s Average min max
00:21:08.323 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 1461.00 5.71 684.41 251.67 924.78
00:21:08.323 ========================================================
00:21:08.323 Total : 1461.00 5.71 684.41 251.67 924.78
00:21:08.323
00:21:08.323 [2024-10-16 07:04:07.424634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c1dd0 is same with the state(6) to be set
00:21:08.323 07:04:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 3165504
00:21:08.323 Initializing NVMe Controllers
00:21:08.323 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:21:08.323 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2
00:21:08.323 Initialization complete. Launching workers.
00:21:08.323 ========================================================
00:21:08.323 Latency(us)
00:21:08.323 Device Information : IOPS MiB/s Average min max
00:21:08.323 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40913.10 40751.61 41182.13
00:21:08.323 ========================================================
00:21:08.323 Total : 25.00 0.10 40913.10 40751.61 41182.13
00:21:08.323
00:21:08.323 07:04:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 3165505
00:21:08.323 07:04:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:21:08.323 07:04:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini
00:21:08.323 07:04:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # nvmfcleanup
00:21:08.323 07:04:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync
00:21:08.323 07:04:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:21:08.323 07:04:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e
00:21:08.323 07:04:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20}
00:21:08.323 07:04:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:21:08.323 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:21:08.323 07:04:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:21:08.323 07:04:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e
00:21:08.323 07:04:07
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:21:08.323 07:04:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@515 -- # '[' -n 3165168 ']' 00:21:08.323 07:04:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # killprocess 3165168 00:21:08.323 07:04:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 3165168 ']' 00:21:08.323 07:04:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 3165168 00:21:08.323 07:04:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:21:08.323 07:04:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:08.324 07:04:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3165168 00:21:08.324 07:04:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:08.324 07:04:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:08.324 07:04:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3165168' 00:21:08.324 killing process with pid 3165168 00:21:08.324 07:04:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 3165168 00:21:08.324 07:04:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@974 -- # wait 3165168 00:21:08.324 07:04:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:08.324 07:04:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:08.324 07:04:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:08.324 07:04:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:21:08.324 07:04:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-save 00:21:08.324 07:04:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:08.324 07:04:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-restore 00:21:08.324 07:04:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:08.324 07:04:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:08.324 07:04:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.324 07:04:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:08.324 07:04:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.869 07:04:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:10.869 00:21:10.869 real 0m12.495s 00:21:10.869 user 0m8.176s 00:21:10.869 sys 0m6.697s 00:21:10.869 07:04:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:10.869 07:04:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@10 -- # set +x
00:21:10.869 ************************************
00:21:10.869 END TEST nvmf_control_msg_list
00:21:10.869 ************************************
00:21:10.869 07:04:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp
00:21:10.869 07:04:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:21:10.869 07:04:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:21:10.869 07:04:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:21:10.869 ************************************
00:21:10.869 START TEST nvmf_wait_for_buf
00:21:10.869 ************************************
00:21:10.869 07:04:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp
00:21:10.869 * Looking for test storage...
00:21:10.869 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:21:10.869 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:21:10.869 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:21:10.869 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version
00:21:10.869 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:21:10.869 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:21:10.869 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l
00:21:10.869 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l
00:21:10.869 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-:
00:21:10.869 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1
00:21:10.869 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-:
00:21:10.869 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2
00:21:10.869 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<'
00:21:10.869 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2
00:21:10.869 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1
00:21:10.869 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:21:10.869 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in
00:21:10.869 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1
00:21:10.869 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 ))
00:21:10.869 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:21:10.869 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:21:10.869 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:21:10.869 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:10.869 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:21:10.869 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:10.869 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:21:10.869 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:21:10.869 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:10.869 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:21:10.869 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:10.869 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:10.869 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:10.869 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:21:10.869 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:10.869 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:10.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.869 --rc genhtml_branch_coverage=1 00:21:10.869 --rc genhtml_function_coverage=1 00:21:10.869 --rc genhtml_legend=1 00:21:10.869 --rc geninfo_all_blocks=1 00:21:10.869 --rc geninfo_unexecuted_blocks=1 00:21:10.869 00:21:10.869 ' 00:21:10.869 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:10.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.869 --rc genhtml_branch_coverage=1 00:21:10.869 --rc genhtml_function_coverage=1 00:21:10.869 --rc genhtml_legend=1 00:21:10.869 --rc geninfo_all_blocks=1 00:21:10.869 --rc geninfo_unexecuted_blocks=1 00:21:10.869 00:21:10.869 ' 00:21:10.869 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:10.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.869 --rc genhtml_branch_coverage=1 00:21:10.869 --rc genhtml_function_coverage=1 00:21:10.869 --rc genhtml_legend=1 00:21:10.869 --rc geninfo_all_blocks=1 00:21:10.869 --rc geninfo_unexecuted_blocks=1 00:21:10.869 00:21:10.869 ' 00:21:10.869 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:10.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.869 --rc genhtml_branch_coverage=1 00:21:10.869 --rc genhtml_function_coverage=1 00:21:10.869 --rc genhtml_legend=1 00:21:10.869 --rc geninfo_all_blocks=1 00:21:10.869 --rc geninfo_unexecuted_blocks=1 00:21:10.869 00:21:10.869 ' 00:21:10.869 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:10.869 07:04:10 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:21:10.869 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:10.870 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:10.870 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:10.870 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:10.870 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:10.870 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:10.870 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:10.870 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:10.870 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:10.870 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:10.870 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:10.870 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:10.870 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:10.870 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:10.870 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:10.870 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:10.870 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:10.870 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:10.870 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:10.870 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:10.870 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:10.870 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.870 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.870 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.870 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:21:10.870 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.870 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:21:10.870 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:10.870 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:10.870 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:10.870 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:10.870 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:10.870 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:10.870 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:10.870 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:10.870 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:10.870 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:10.870 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:21:10.870 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@467 -- # 
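The "[: : integer expression expected" message recorded above is not test output proper: common.sh line 33 evaluates '[' '' -eq 1 ']' when a test knob is unset, and test(1)'s -eq requires integers on both sides. A hedged sketch of the failure and the usual defensive spelling (the flag variable name is illustrative):

    flag=''
    [ "$flag" -eq 1 ]                # -> "[: : integer expression expected"
    [ "${flag:-0}" -eq 1 ] || echo "feature off (empty treated as 0)"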
'[' -z tcp ']' 00:21:10.870 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:10.870 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:10.870 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:10.870 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:10.870 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:10.870 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:10.870 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.870 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:10.870 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:21:10.870 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:21:10.870 07:04:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:19.008 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:19.009 
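The eval '_remove_spdk_ns 15> /dev/null' entries above lean on bash's BASH_XTRACEFD: when xtrace output is routed to fd 15, re-aiming fd 15 for a single command mutes that one command's trace. A minimal sketch of the idea (assuming autotest_common.sh keeps BASH_XTRACEFD pointed at fd 15, as the trace suggests):

    exec 15>&2                     # keep fd 15 open, normally aimed at stderr
    export BASH_XTRACEFD=15        # set -x output now goes to fd 15
    set -x
    echo traced                    # trace line visible on stderr
    echo quiet 15> /dev/null       # fd 15 re-aimed for one command: trace muted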
07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:19.009 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:19.009 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
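gather_supported_nvmf_pci_devs, traced above, classifies NICs by PCI vendor:device ID; 0x8086:0x159b is an Intel E810 port bound to the ice driver, hence the two "Found 0000:4b:00.x" lines. A condensed sketch of the sysfs matching (not the full pci_bus_cache machinery the script uses):

    intel=0x8086
    for dev in /sys/bus/pci/devices/*; do
        vendor=$(<"$dev/vendor"); device=$(<"$dev/device")
        if [[ $vendor == "$intel" && $device == 0x159b ]]; then
            echo "Found ${dev##*/} ($vendor - $device)"   # an E810 port
        fi
    done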
tcp == rdma ]] 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:19.009 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:19.009 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # is_hw=yes 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:19.009 07:04:17 
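The "Found net devices under 0000:4b:00.0: cvl_0_0" lines come from globbing each PCI function's net/ directory in sysfs, where the kernel exposes the bound netdev's name. The equivalent lookup on its own (address taken from the log):

    pci=0000:4b:00.0
    for path in /sys/bus/pci/devices/$pci/net/*; do
        [[ -e $path ]] || continue          # no netdev bound to this function
        echo "Found net devices under $pci: ${path##*/}"
    done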
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:19.009 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:19.010 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:19.010 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:19.010 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:19.010 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:19.010 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:19.010 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:19.010 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:19.010 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:19.010 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.687 ms 00:21:19.010 00:21:19.010 --- 10.0.0.2 ping statistics --- 00:21:19.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.010 rtt min/avg/max/mdev = 0.687/0.687/0.687/0.000 ms 00:21:19.010 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:19.010 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:19.010 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms 00:21:19.010 00:21:19.010 --- 10.0.0.1 ping statistics --- 00:21:19.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.010 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:21:19.010 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:19.010 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@448 -- # return 0 00:21:19.010 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:19.010 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:19.010 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:19.010 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:19.010 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:19.010 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:19.010 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:19.010 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:21:19.010 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:19.010 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:19.010 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:19.010 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # nvmfpid=3169850 00:21:19.010 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # waitforlisten 3169850 00:21:19.010 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:19.010 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 3169850 ']' 00:21:19.010 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:19.010 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:19.010 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:19.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:19.010 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:19.010 07:04:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:19.010 [2024-10-16 07:04:17.737611] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
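nvmf_tcp_init and nvmfappstart, traced above, split the two E810 ports into a loopback-free topology: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), an iptables rule admits the NVMe/TCP port, both directions are ping-checked, and nvmf_tgt is launched inside the namespace with --wait-for-rpc so pools can be tuned before init. A condensed sketch of the same steps (root required; names and values from the log):

    NS=cvl_0_0_ns_spdk
    ip netns add $NS
    ip link set cvl_0_0 netns $NS
    ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side
    ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec $NS ip link set cvl_0_0 up
    ip netns exec $NS ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                     # initiator -> target reachability
    ip netns exec $NS ping -c 1 10.0.0.1   # target -> initiator reachability
    # Launch the target in the namespace, paused until RPC configuration:
    ip netns exec $NS ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done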
00:21:19.010 [2024-10-16 07:04:17.737683] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:19.010 [2024-10-16 07:04:17.829171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.010 [2024-10-16 07:04:17.880332] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:19.010 [2024-10-16 07:04:17.880389] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:19.010 [2024-10-16 07:04:17.880398] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:19.010 [2024-10-16 07:04:17.880405] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:19.010 [2024-10-16 07:04:17.880411] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:19.010 [2024-10-16 07:04:17.881208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:19.271 07:04:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:19.271 07:04:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:21:19.271 07:04:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:19.271 07:04:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:19.271 07:04:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:19.271 07:04:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:19.271 07:04:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:19.271 07:04:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:19.271 07:04:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:21:19.271 07:04:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.271 07:04:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:19.271 07:04:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.271 07:04:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:21:19.271 07:04:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.271 07:04:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:19.271 07:04:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.271 07:04:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:21:19.271 07:04:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.271 07:04:18 
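The point of the rpc_cmd calls traced above is to starve the iobuf small pool before framework init, so that reads are forced onto the wait-for-buffer path this test exists to exercise. The same sequence via rpc.py, with option values copied from the trace:

    ./scripts/rpc.py accel_set_options --small-cache-size 0 --large-cache-size 0
    ./scripts/rpc.py iobuf_set_options --small-pool-count 154 --small_bufsize=8192
    ./scripts/rpc.py framework_start_init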
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:19.271 07:04:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.271 07:04:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:19.271 07:04:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.271 07:04:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:19.271 Malloc0 00:21:19.271 07:04:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.271 07:04:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:21:19.271 07:04:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.271 07:04:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:19.271 [2024-10-16 07:04:18.709975] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:19.271 07:04:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.271 07:04:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:21:19.271 07:04:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.271 07:04:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:19.271 07:04:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.271 07:04:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:19.271 07:04:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.271 07:04:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:19.271 07:04:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.271 07:04:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:19.272 07:04:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.272 07:04:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:19.272 [2024-10-16 07:04:18.746298] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:19.272 07:04:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.272 07:04:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:19.535 [2024-10-16 07:04:18.839772] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:21:20.921 Initializing NVMe Controllers
00:21:20.921 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:21:20.921 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0
00:21:20.921 Initialization complete. Launching workers.
00:21:20.921 ========================================================
00:21:20.921 Latency(us)
00:21:20.921 Device Information : IOPS MiB/s Average min max
00:21:20.921 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 26.00 3.25 160688.02 47862.40 191558.77
00:21:20.921 ========================================================
00:21:20.921 Total : 26.00 3.25 160688.02 47862.40 191558.77
00:21:20.921
00:21:20.921 07:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:21:20.921 07:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:21:20.921 07:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.921 07:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:20.921 07:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.921 07:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=390 00:21:20.921 07:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 390 -eq 0 ]] 00:21:20.921 07:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:20.921 07:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:21:20.921 07:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:20.921 07:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:21:20.921 07:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:20.921 07:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:21:20.921 07:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:20.921 07:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:20.921 rmmod nvme_tcp 00:21:20.921 rmmod nvme_fabrics 00:21:20.921 rmmod nvme_keyring 00:21:21.182 07:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:21.182 07:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:21:21.182 07:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:21:21.182 07:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@515 -- # '[' -n 3169850 ']' 00:21:21.182 07:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # killprocess 3169850 00:21:21.182 07:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 3169850 ']' 00:21:21.182 07:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # kill -0 3169850 00:21:21.182 07:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
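Everything the perf report above measures was assembled a few entries earlier: a 32 MiB malloc bdev behind a TCP subsystem whose transport is created with a deliberately tiny shared buffer pool (-n 24 -b 24), then a one-second 128 KiB random-read load at queue depth 4. The pass criterion is not throughput but the retry counter: retry_count=390 means the nvmf_TCP iobuf small pool ran dry 390 times and I/O really did wait for buffers; zero retries would fail the run. The sequence condensed from the trace (rpc.py socket defaults assumed):

    ./scripts/rpc.py bdev_malloc_create -b Malloc0 32 512
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
    ./scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
    ./build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    # Assert the wait-for-buffer path was actually exercised:
    retry_count=$(./scripts/rpc.py iobuf_get_stats \
        | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
    [[ $retry_count -eq 0 ]] && { echo "FAIL: no buffer waits observed"; exit 1; }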
common/autotest_common.sh@955 -- # uname 00:21:21.182 07:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:21.182 07:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3169850 00:21:21.182 07:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:21.182 07:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:21.182 07:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3169850' 00:21:21.182 killing process with pid 3169850 00:21:21.182 07:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 3169850 00:21:21.182 07:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 3169850 00:21:21.182 07:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:21.182 07:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:21.182 07:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:21.182 07:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:21:21.182 07:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-save 00:21:21.182 07:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:21.182 07:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-restore 00:21:21.182 07:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:21.182 07:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:21.182 07:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:21.182 07:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:21.183 07:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:23.728 07:04:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:23.728 00:21:23.728 real 0m12.818s 00:21:23.728 user 0m5.219s 00:21:23.728 sys 0m6.185s 00:21:23.728 07:04:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:23.728 07:04:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:23.728 ************************************ 00:21:23.728 END TEST nvmf_wait_for_buf 00:21:23.728 ************************************ 00:21:23.728 07:04:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:21:23.728 07:04:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:21:23.728 07:04:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:21:23.728 07:04:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:21:23.728 07:04:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:21:23.728 07:04:22 
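nvmftestfini, traced above, unwinds the setup in reverse; note that the iptables cleanup keys on the SPDK_NVMF comment the ipts wrapper attached when the rule was added, so only the test's own rule is dropped. Condensed (names as in the log):

    kill "$nvmfpid"; wait "$nvmfpid" 2>/dev/null
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only tagged rules
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null
    ip -4 addr flush cvl_0_1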
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:31.868 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:31.868 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:31.868 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:31.868 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:31.869 07:04:29 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:31.869 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:31.869 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:31.869 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:31.869 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:31.869 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:21:31.869 07:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:31.869 07:04:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:31.869 07:04:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:31.869 07:04:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:31.869 ************************************ 00:21:31.869 START TEST nvmf_perf_adq 00:21:31.869 ************************************ 00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:31.869 * Looking for test storage... 00:21:31.869 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lcov --version 00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:31.869 07:04:30 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:31.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.869 --rc genhtml_branch_coverage=1 00:21:31.869 --rc genhtml_function_coverage=1 00:21:31.869 --rc genhtml_legend=1 00:21:31.869 --rc geninfo_all_blocks=1 00:21:31.869 --rc geninfo_unexecuted_blocks=1 00:21:31.869 00:21:31.869 ' 00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:31.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.869 --rc genhtml_branch_coverage=1 00:21:31.869 --rc genhtml_function_coverage=1 00:21:31.869 --rc genhtml_legend=1 00:21:31.869 --rc geninfo_all_blocks=1 00:21:31.869 --rc geninfo_unexecuted_blocks=1 00:21:31.869 00:21:31.869 ' 00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:31.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.869 --rc genhtml_branch_coverage=1 00:21:31.869 --rc genhtml_function_coverage=1 00:21:31.869 --rc genhtml_legend=1 00:21:31.869 --rc geninfo_all_blocks=1 00:21:31.869 --rc geninfo_unexecuted_blocks=1 00:21:31.869 00:21:31.869 ' 00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:31.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.869 --rc genhtml_branch_coverage=1 00:21:31.869 --rc genhtml_function_coverage=1 00:21:31.869 --rc genhtml_legend=1 00:21:31.869 --rc geninfo_all_blocks=1 00:21:31.869 --rc geninfo_unexecuted_blocks=1 00:21:31.869 00:21:31.869 ' 00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.869 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.870 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.870 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:31.870 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.870 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:21:31.870 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:31.870 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:31.870 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:31.870 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:31.870 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:31.870 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:31.870 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:31.870 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:31.870 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:31.870 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:31.870 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:31.870 07:04:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:31.870 07:04:30 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:38.457 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:38.457 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:38.457 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:38.457 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:38.457 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:38.457 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:38.457 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:38.457 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:38.457 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:38.457 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:38.457 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:38.458 07:04:37 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:38.458 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:38.458 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:38.458 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:38.458 07:04:37 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:38.458 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:38.458 07:04:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:39.842 07:04:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:41.751 07:04:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:47.079 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:47.079 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 
'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:47.079 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:47.079 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:47.079 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:47.080 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:47.080 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:47.080 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:47.080 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:47.080 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:47.080 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:47.080 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:47.080 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:47.080 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:47.080 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:47.080 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:47.080 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:47.080 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:47.080 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:47.080 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.578 ms 00:21:47.080 00:21:47.080 --- 10.0.0.2 ping statistics --- 00:21:47.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.080 rtt min/avg/max/mdev = 0.578/0.578/0.578/0.000 ms 00:21:47.080 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:47.080 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:47.080 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:21:47.080 00:21:47.080 --- 10.0.0.1 ping statistics --- 00:21:47.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.080 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:21:47.080 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:47.080 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:21:47.080 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:47.080 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:47.080 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:47.080 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:47.080 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:47.080 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:47.080 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:47.080 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:47.080 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:47.080 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:47.080 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:47.080 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=3180081 00:21:47.080 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 3180081 00:21:47.080 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:47.080 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 3180081 ']' 00:21:47.080 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:47.080 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:47.080 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:47.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:47.080 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:47.080 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:47.080 [2024-10-16 07:04:46.463589] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
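
For readers tracing the nvmftestinit block above: it moves one of the two ice ports into a private network namespace to act as the NVMe/TCP target, leaves the other in the root namespace as the initiator, opens the NVMe/TCP port through the firewall, and ping-checks both directions before launching the target application inside the namespace. A condensed sketch of the traced commands (interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addressing, and all flags are taken from this run; the nvmf_tgt path is shortened):

  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"                       # target side gets its own namespace
  ip link set cvl_0_0 netns "$NS"          # first ice port -> target namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1      # second ice port stays as initiator
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  # accept NVMe/TCP on the initiator-facing port; the SPDK_NVMF comment tag
  # is what nvmftestfini later greps out of iptables-save to undo the rule
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                       # initiator -> target reachability
  ip netns exec "$NS" ping -c 1 10.0.0.1   # target -> initiator reachability
  # start the target in the namespace; --wait-for-rpc defers framework init
  # so socket options can still be changed before the transport is created
  ip netns exec "$NS" build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
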
00:21:47.080 [2024-10-16 07:04:46.463654] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:47.080 [2024-10-16 07:04:46.545759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:47.368 [2024-10-16 07:04:46.608482] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:47.368 [2024-10-16 07:04:46.608548] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:47.368 [2024-10-16 07:04:46.608558] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:47.368 [2024-10-16 07:04:46.608567] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:47.368 [2024-10-16 07:04:46.608574] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:47.368 [2024-10-16 07:04:46.611111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:47.368 [2024-10-16 07:04:46.611275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:47.368 [2024-10-16 07:04:46.611438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:47.368 [2024-10-16 07:04:46.611440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:47.368 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:47.368 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:21:47.368 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:47.368 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:47.368 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:47.368 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:47.368 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:21:47.368 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:47.368 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:47.368 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.368 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:47.368 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.368 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:47.368 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:47.368 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.368 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:47.368 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.368 
07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:47.368 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.368 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:47.368 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.368 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:47.368 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.368 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:47.368 [2024-10-16 07:04:46.836922] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:47.368 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.368 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:47.368 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.368 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:47.629 Malloc1 00:21:47.629 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.629 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:47.629 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.629 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:47.629 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.629 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:47.629 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.629 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:47.629 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.629 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:47.629 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.629 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:47.629 [2024-10-16 07:04:46.921093] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:47.629 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.629 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=3180156 00:21:47.629 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:21:47.629 07:04:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:49.539 07:04:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:21:49.539 07:04:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.539 07:04:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:49.539 07:04:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.539 07:04:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:21:49.539 "tick_rate": 2400000000, 00:21:49.539 "poll_groups": [ 00:21:49.539 { 00:21:49.539 "name": "nvmf_tgt_poll_group_000", 00:21:49.539 "admin_qpairs": 1, 00:21:49.539 "io_qpairs": 1, 00:21:49.539 "current_admin_qpairs": 1, 00:21:49.539 "current_io_qpairs": 1, 00:21:49.539 "pending_bdev_io": 0, 00:21:49.539 "completed_nvme_io": 15378, 00:21:49.539 "transports": [ 00:21:49.539 { 00:21:49.539 "trtype": "TCP" 00:21:49.539 } 00:21:49.539 ] 00:21:49.539 }, 00:21:49.539 { 00:21:49.539 "name": "nvmf_tgt_poll_group_001", 00:21:49.539 "admin_qpairs": 0, 00:21:49.539 "io_qpairs": 1, 00:21:49.539 "current_admin_qpairs": 0, 00:21:49.539 "current_io_qpairs": 1, 00:21:49.539 "pending_bdev_io": 0, 00:21:49.539 "completed_nvme_io": 15757, 00:21:49.539 "transports": [ 00:21:49.539 { 00:21:49.539 "trtype": "TCP" 00:21:49.539 } 00:21:49.539 ] 00:21:49.539 }, 00:21:49.539 { 00:21:49.539 "name": "nvmf_tgt_poll_group_002", 00:21:49.539 "admin_qpairs": 0, 00:21:49.539 "io_qpairs": 1, 00:21:49.539 "current_admin_qpairs": 0, 00:21:49.539 "current_io_qpairs": 1, 00:21:49.539 "pending_bdev_io": 0, 00:21:49.539 "completed_nvme_io": 16600, 00:21:49.539 "transports": [ 00:21:49.539 { 00:21:49.539 "trtype": "TCP" 00:21:49.539 } 00:21:49.539 ] 00:21:49.539 }, 00:21:49.539 { 00:21:49.539 "name": "nvmf_tgt_poll_group_003", 00:21:49.539 "admin_qpairs": 0, 00:21:49.539 "io_qpairs": 1, 00:21:49.539 "current_admin_qpairs": 0, 00:21:49.539 "current_io_qpairs": 1, 00:21:49.539 "pending_bdev_io": 0, 00:21:49.539 "completed_nvme_io": 15517, 00:21:49.539 "transports": [ 00:21:49.539 { 00:21:49.539 "trtype": "TCP" 00:21:49.539 } 00:21:49.539 ] 00:21:49.539 } 00:21:49.539 ] 00:21:49.539 }' 00:21:49.539 07:04:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:49.539 07:04:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:21:49.539 07:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:21:49.539 07:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:21:49.539 07:04:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 3180156 00:21:57.676 Initializing NVMe Controllers 00:21:57.676 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:57.676 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:57.676 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:57.676 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:57.676 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:21:57.676 Initialization complete. Launching workers. 00:21:57.676 ======================================================== 00:21:57.676 Latency(us) 00:21:57.676 Device Information : IOPS MiB/s Average min max 00:21:57.676 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 13307.40 51.98 4809.28 1274.64 12454.61 00:21:57.676 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 12628.30 49.33 5067.60 1543.21 13717.28 00:21:57.676 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 11719.50 45.78 5461.57 1241.37 14125.63 00:21:57.676 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 12340.20 48.20 5186.49 1168.97 13073.17 00:21:57.676 ======================================================== 00:21:57.676 Total : 49995.39 195.29 5120.54 1168.97 14125.63 00:21:57.676 00:21:57.676 [2024-10-16 07:04:57.038346] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4210 is same with the state(6) to be set 00:21:57.676 07:04:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:21:57.676 07:04:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:57.676 07:04:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:57.676 07:04:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:57.676 07:04:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:57.676 07:04:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:57.676 07:04:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:57.676 rmmod nvme_tcp 00:21:57.676 rmmod nvme_fabrics 00:21:57.676 rmmod nvme_keyring 00:21:57.676 07:04:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:57.676 07:04:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:57.676 07:04:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:57.676 07:04:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 3180081 ']' 00:21:57.676 07:04:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 3180081 00:21:57.676 07:04:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 3180081 ']' 00:21:57.676 07:04:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 3180081 00:21:57.676 07:04:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:21:57.676 07:04:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:57.676 07:04:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3180081 00:21:57.954 07:04:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:57.954 07:04:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:57.954 07:04:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3180081' 00:21:57.954 killing process with pid 3180081 00:21:57.954 07:04:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 3180081 
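
The pass/fail logic of this first run sits in the two traced commands above: spdk_nvme_perf is pinned to cores 4-7 (-c 0xF0) and opens one NVMe/TCP connection per core, and nvmf_get_stats on the target must then show each of the four poll groups holding exactly one I/O qpair. Restated as a sketch (rpc_cmd in the trace is SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock socket; paths shortened):

  # initiator side: 4 cores, qd 64, 4 KiB random reads for 10 s
  build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

  # target side: count poll groups that own exactly one active I/O qpair
  count=$(scripts/rpc.py nvmf_get_stats \
          | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
          | wc -l)
  [[ $count -ne 4 ]] && { echo "qpairs not spread one per poll group"; exit 1; }

The check passed here (count=4), and the latency table is self-consistent: 4 connections at qd 64 means 256 outstanding requests, and 256 / ~50k IOPS gives roughly 5.1 ms, matching the reported overall average.
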
00:21:57.954 07:04:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 3180081 00:21:57.954 07:04:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:57.954 07:04:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:57.954 07:04:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:57.954 07:04:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:57.955 07:04:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:21:57.955 07:04:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:57.955 07:04:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:21:57.955 07:04:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:57.955 07:04:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:57.955 07:04:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:57.955 07:04:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:57.955 07:04:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:00.500 07:04:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:00.500 07:04:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:22:00.500 07:04:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:22:00.500 07:04:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:22:01.885 07:05:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:22:03.798 07:05:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:09.089 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:22:09.089 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:09.089 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:09.089 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:09.089 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:09.089 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:09.089 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:09.089 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:09.089 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:09.089 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:09.089 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:09.089 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:09.089 07:05:08 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:09.089 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:09.089 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:09.089 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:09.089 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:09.089 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:09.089 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:09.089 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:09.089 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:09.089 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:09.089 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:09.089 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:09.089 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:09.089 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:09.089 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:09.089 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:09.089 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:09.089 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:09.089 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:09.089 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:09.089 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:09.089 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:09.089 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:09.089 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:09.089 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:09.090 07:05:08 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:09.090 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:09.090 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:09.090 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:09.090 07:05:08 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:09.090 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # 
ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:09.090 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:09.090 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms 00:22:09.090 00:22:09.090 --- 10.0.0.2 ping statistics --- 00:22:09.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:09.090 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:09.090 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:09.090 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:22:09.090 00:22:09.090 --- 10.0.0.1 ping statistics --- 00:22:09.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:09.090 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:09.090 07:05:08 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:09.090 net.core.busy_poll = 1 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:09.090 net.core.busy_read = 1 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:09.090 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:09.353 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:22:09.353 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:09.353 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:09.353 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:09.353 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:09.353 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:09.353 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:09.353 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=3184852 00:22:09.353 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 3184852 00:22:09.353 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:09.353 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 3184852 ']' 00:22:09.353 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:09.353 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:09.353 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:09.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:09.353 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:09.353 07:05:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:09.353 [2024-10-16 07:05:08.821258] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
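For readers reconstructing the adq_configure_driver phase traced above, the same host-side setup condenses to the short script below. The interface, namespace, and address values are the ones from this run; the mqprio arguments read as two traffic classes (priorities 0 and 1 mapped to TC0 and TC1), each backed by two hardware queues, with TC1 starting at queue 2. This is a sketch of the traced commands, not the full perf_adq.sh helper.

ns='ip netns exec cvl_0_0_ns_spdk'
$ns ethtool --offload cvl_0_0 hw-tc-offload on        # let the ice driver offload tc filters
$ns ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1                        # poll sockets instead of sleeping on them
sysctl -w net.core.busy_read=1
# TC0 = queues 0-1 (default traffic), TC1 = queues 2-3 (ADQ traffic), offloaded in channel mode
$ns /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
$ns /usr/sbin/tc qdisc add dev cvl_0_0 ingress
# steer NVMe/TCP flows for 10.0.0.2:4420 into TC1 entirely in hardware (skip_sw)
$ns /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1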
00:22:09.353 [2024-10-16 07:05:08.821331] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:09.614 [2024-10-16 07:05:08.909819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:09.614 [2024-10-16 07:05:08.963069] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:09.614 [2024-10-16 07:05:08.963124] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:09.614 [2024-10-16 07:05:08.963132] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:09.614 [2024-10-16 07:05:08.963139] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:09.614 [2024-10-16 07:05:08.963146] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:09.614 [2024-10-16 07:05:08.965202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:09.614 [2024-10-16 07:05:08.965362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:09.614 [2024-10-16 07:05:08.965524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:09.614 [2024-10-16 07:05:08.965524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:10.187 07:05:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:10.187 07:05:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:22:10.187 07:05:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:10.187 07:05:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:10.187 07:05:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:10.449 07:05:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:10.449 07:05:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:22:10.449 07:05:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:10.449 07:05:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:10.449 07:05:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.449 07:05:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:10.449 07:05:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.449 07:05:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:10.449 07:05:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:10.449 07:05:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.449 07:05:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:10.449 07:05:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.449 
07:05:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:10.449 07:05:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.449 07:05:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:10.449 07:05:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.449 07:05:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:10.449 07:05:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.449 07:05:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:10.449 [2024-10-16 07:05:09.849729] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:10.449 07:05:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.449 07:05:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:10.449 07:05:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.449 07:05:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:10.449 Malloc1 00:22:10.449 07:05:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.449 07:05:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:10.449 07:05:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.449 07:05:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:10.449 07:05:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.449 07:05:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:10.449 07:05:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.449 07:05:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:10.449 07:05:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.449 07:05:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:10.449 07:05:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.449 07:05:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:10.449 [2024-10-16 07:05:09.925985] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:10.449 07:05:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.449 07:05:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=3184952 00:22:10.449 07:05:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:22:10.449 07:05:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:12.998 07:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:22:12.998 07:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.998 07:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:12.998 07:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.998 07:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:22:12.998 "tick_rate": 2400000000, 00:22:12.998 "poll_groups": [ 00:22:12.998 { 00:22:12.998 "name": "nvmf_tgt_poll_group_000", 00:22:12.999 "admin_qpairs": 1, 00:22:12.999 "io_qpairs": 3, 00:22:12.999 "current_admin_qpairs": 1, 00:22:12.999 "current_io_qpairs": 3, 00:22:12.999 "pending_bdev_io": 0, 00:22:12.999 "completed_nvme_io": 27674, 00:22:12.999 "transports": [ 00:22:12.999 { 00:22:12.999 "trtype": "TCP" 00:22:12.999 } 00:22:12.999 ] 00:22:12.999 }, 00:22:12.999 { 00:22:12.999 "name": "nvmf_tgt_poll_group_001", 00:22:12.999 "admin_qpairs": 0, 00:22:12.999 "io_qpairs": 1, 00:22:12.999 "current_admin_qpairs": 0, 00:22:12.999 "current_io_qpairs": 1, 00:22:12.999 "pending_bdev_io": 0, 00:22:12.999 "completed_nvme_io": 26767, 00:22:12.999 "transports": [ 00:22:12.999 { 00:22:12.999 "trtype": "TCP" 00:22:12.999 } 00:22:12.999 ] 00:22:12.999 }, 00:22:12.999 { 00:22:12.999 "name": "nvmf_tgt_poll_group_002", 00:22:12.999 "admin_qpairs": 0, 00:22:12.999 "io_qpairs": 0, 00:22:12.999 "current_admin_qpairs": 0, 00:22:12.999 "current_io_qpairs": 0, 00:22:12.999 "pending_bdev_io": 0, 00:22:12.999 "completed_nvme_io": 0, 00:22:12.999 "transports": [ 00:22:12.999 { 00:22:12.999 "trtype": "TCP" 00:22:12.999 } 00:22:12.999 ] 00:22:12.999 }, 00:22:12.999 { 00:22:12.999 "name": "nvmf_tgt_poll_group_003", 00:22:12.999 "admin_qpairs": 0, 00:22:12.999 "io_qpairs": 0, 00:22:12.999 "current_admin_qpairs": 0, 00:22:12.999 "current_io_qpairs": 0, 00:22:12.999 "pending_bdev_io": 0, 00:22:12.999 "completed_nvme_io": 0, 00:22:12.999 "transports": [ 00:22:12.999 { 00:22:12.999 "trtype": "TCP" 00:22:12.999 } 00:22:12.999 ] 00:22:12.999 } 00:22:12.999 ] 00:22:12.999 }' 00:22:12.999 07:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:12.999 07:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:22:12.999 07:05:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:22:12.999 07:05:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:22:12.999 07:05:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 3184952 00:22:21.136 Initializing NVMe Controllers 00:22:21.136 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:21.136 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:21.136 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:21.136 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:21.136 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 
7 00:22:21.136 Initialization complete. Launching workers. 00:22:21.136 ======================================================== 00:22:21.136 Latency(us) 00:22:21.136 Device Information : IOPS MiB/s Average min max 00:22:21.136 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 17598.70 68.74 3636.55 1007.82 45933.58 00:22:21.136 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6111.80 23.87 10504.82 1399.49 58880.85 00:22:21.136 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7106.90 27.76 9009.59 1327.00 59470.37 00:22:21.136 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6664.80 26.03 9602.71 1347.03 61528.49 00:22:21.136 ======================================================== 00:22:21.136 Total : 37482.20 146.41 6836.11 1007.82 61528.49 00:22:21.136 00:22:21.136 07:05:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:22:21.136 07:05:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:21.136 07:05:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:22:21.136 07:05:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:21.136 07:05:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:22:21.136 07:05:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:21.136 07:05:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:21.136 rmmod nvme_tcp 00:22:21.136 rmmod nvme_fabrics 00:22:21.136 rmmod nvme_keyring 00:22:21.136 07:05:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:21.136 07:05:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:22:21.136 07:05:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:22:21.136 07:05:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 3184852 ']' 00:22:21.136 07:05:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 3184852 00:22:21.136 07:05:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 3184852 ']' 00:22:21.136 07:05:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 3184852 00:22:21.136 07:05:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:22:21.136 07:05:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:21.136 07:05:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3184852 00:22:21.136 07:05:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:21.136 07:05:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:21.136 07:05:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3184852' 00:22:21.136 killing process with pid 3184852 00:22:21.136 07:05:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 3184852 00:22:21.136 07:05:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 3184852 00:22:21.136 07:05:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:21.136 07:05:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:21.136 07:05:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:21.136 07:05:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:21.136 07:05:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:22:21.136 07:05:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:22:21.136 07:05:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:21.136 07:05:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:21.136 07:05:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:21.136 07:05:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:21.136 07:05:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:21.136 07:05:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:24.435 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:24.435 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:22:24.435 00:22:24.435 real 0m53.485s 00:22:24.435 user 2m47.274s 00:22:24.435 sys 0m11.460s 00:22:24.435 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:24.435 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:24.435 ************************************ 00:22:24.435 END TEST nvmf_perf_adq 00:22:24.435 ************************************ 00:22:24.435 07:05:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:24.435 07:05:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:24.435 07:05:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:24.435 07:05:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:24.435 ************************************ 00:22:24.435 START TEST nvmf_shutdown 00:22:24.435 ************************************ 00:22:24.435 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:24.435 * Looking for test storage... 
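Before the shutdown suite's output continues: the nvmftestfini teardown traced above, stripped of the xtrace noise, amounts to the sequence sketched below. The kill/wait pair stands in for the killprocess helper, and the final namespace removal is an assumption about what _remove_spdk_ns does (its body is hidden behind xtrace_disable_per_cmd in this log); the iptables round-trip is the iptr helper as traced, which drops only rules carrying the SPDK_NVMF comment that ipts attached when they were installed.

modprobe -v -r nvme-tcp nvme-fabrics          # unloads nvme_tcp, nvme_fabrics, nvme_keyring as traced
kill 3184852 && wait 3184852                  # condensed killprocess: stop the nvmf_tgt reactor
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip -4 addr flush cvl_0_1
ip netns delete cvl_0_0_ns_spdk               # assumed equivalent of _remove_spdk_ns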
00:22:24.435 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:24.435 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:24.435 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:22:24.435 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:24.435 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:24.435 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:24.435 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:24.435 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:24.435 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:24.435 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:24.435 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:24.435 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:24.435 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:24.435 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:24.435 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:24.435 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:24.435 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:24.435 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:22:24.435 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:24.435 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:24.435 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:24.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.436 --rc genhtml_branch_coverage=1 00:22:24.436 --rc genhtml_function_coverage=1 00:22:24.436 --rc genhtml_legend=1 00:22:24.436 --rc geninfo_all_blocks=1 00:22:24.436 --rc geninfo_unexecuted_blocks=1 00:22:24.436 00:22:24.436 ' 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:24.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.436 --rc genhtml_branch_coverage=1 00:22:24.436 --rc genhtml_function_coverage=1 00:22:24.436 --rc genhtml_legend=1 00:22:24.436 --rc geninfo_all_blocks=1 00:22:24.436 --rc geninfo_unexecuted_blocks=1 00:22:24.436 00:22:24.436 ' 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:24.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.436 --rc genhtml_branch_coverage=1 00:22:24.436 --rc genhtml_function_coverage=1 00:22:24.436 --rc genhtml_legend=1 00:22:24.436 --rc geninfo_all_blocks=1 00:22:24.436 --rc geninfo_unexecuted_blocks=1 00:22:24.436 00:22:24.436 ' 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:24.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.436 --rc genhtml_branch_coverage=1 00:22:24.436 --rc genhtml_function_coverage=1 00:22:24.436 --rc genhtml_legend=1 00:22:24.436 --rc geninfo_all_blocks=1 00:22:24.436 --rc geninfo_unexecuted_blocks=1 00:22:24.436 00:22:24.436 ' 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
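The scripts/common.sh trace above walks 'lt 1.15 2' component by component to decide that the installed lcov predates 2.x, which selects the legacy '--rc lcov_*' option spelling. A minimal stand-alone rendering of that dotted-version comparison follows (condensed from the traced cmp_versions logic; the real helper also dispatches on an operator argument and validates components via its decimal helper):

lt() {
    local -a v1 v2
    IFS='.-:' read -ra v1 <<< "$1"      # split on the same separators the trace shows (IFS=.-:)
    IFS='.-:' read -ra v2 <<< "$2"
    local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < max; i++ )); do
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1   # first differing component decides
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
    done
    return 1                                          # equal versions are not strictly less
}
lt 1.15 2 && echo 'lcov predates 2.x: keep legacy --rc lcov_* spellings'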
00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:24.436 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:24.436 07:05:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:24.436 ************************************ 00:22:24.436 START TEST nvmf_shutdown_tc1 00:22:24.436 ************************************ 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:24.436 07:05:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:32.585 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:32.585 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:32.585 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:32.585 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:32.585 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:32.585 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:32.585 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:32.585 07:05:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:22:32.585 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:32.585 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:22:32.585 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:22:32.585 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:22:32.585 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:22:32.585 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:22:32.585 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:32.585 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:32.585 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:32.585 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:32.585 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:32.585 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:32.585 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:32.585 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:32.585 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:32.585 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:32.585 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:32.585 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:32.585 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:32.585 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:32.585 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:32.585 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:32.585 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:32.585 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:32.585 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:32.585 07:05:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:32.585 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:32.585 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:32.585 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:32.585 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:32.585 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:32.585 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:32.585 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:32.585 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:32.585 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:32.585 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:32.585 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:32.585 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:32.585 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:32.585 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:32.585 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:32.585 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:32.585 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:32.585 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:32.585 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:32.585 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:32.585 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:32.585 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:32.585 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:32.585 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:32.585 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:32.585 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:32.585 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:32.585 07:05:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:32.585 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:32.585 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:32.586 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:32.586 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:32.586 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:32.586 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:32.586 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:32.586 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:32.586 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:32.586 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:32.586 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:32.586 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # is_hw=yes 00:22:32.586 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:32.586 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:32.586 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:32.586 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:32.586 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:32.586 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:32.586 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:32.586 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:32.586 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:32.586 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:32.586 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:32.586 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:32.586 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:32.586 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:22:32.586 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:32.586 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:32.586 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:32.586 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:32.586 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:32.586 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:32.586 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:32.586 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:32.586 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:32.586 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:32.586 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:32.586 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:32.586 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:32.586 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms 00:22:32.586 00:22:32.586 --- 10.0.0.2 ping statistics --- 00:22:32.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.586 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:22:32.586 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:32.586 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:32.586 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:22:32.586 00:22:32.586 --- 10.0.0.1 ping statistics --- 00:22:32.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.586 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:22:32.586 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:32.586 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # return 0 00:22:32.586 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:32.586 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:32.586 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:32.586 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:32.586 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:32.586 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:32.586 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:32.586 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:32.586 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:32.586 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:32.586 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:32.586 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # nvmfpid=3191550 00:22:32.586 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # waitforlisten 3191550 00:22:32.586 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:32.586 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 3191550 ']' 00:22:32.586 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:32.586 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:32.586 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:32.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
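The nvmfappstart/waitforlisten pair traced above reduces to a launch-then-poll pattern: start nvmf_tgt inside the target namespace, then retry against the RPC socket until the app answers (the trace shows max_retries=100 and rpc_addr=/var/tmp/spdk.sock). waitforlisten's actual probe is not expanded in this log, so the rpc.py call below is an illustrative stand-in rather than the helper's exact check.

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
for (( i = 0; i < 100; i++ )); do
    # any cheap RPC proves the UNIX domain socket is up; rpc_get_methods is one such probe
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
    sleep 0.1
done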
00:22:32.586 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:32.586 07:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:32.586 [2024-10-16 07:05:31.567740] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:22:32.586 [2024-10-16 07:05:31.567814] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:32.586 [2024-10-16 07:05:31.657456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:32.586 [2024-10-16 07:05:31.710544] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:32.586 [2024-10-16 07:05:31.710599] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:32.586 [2024-10-16 07:05:31.710608] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:32.586 [2024-10-16 07:05:31.710615] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:32.586 [2024-10-16 07:05:31.710621] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:32.586 [2024-10-16 07:05:31.712713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:32.586 [2024-10-16 07:05:31.712893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:32.586 [2024-10-16 07:05:31.713096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:32.586 [2024-10-16 07:05:31.713096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:33.160 07:05:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:33.160 07:05:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:22:33.160 07:05:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:33.160 07:05:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:33.160 07:05:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:33.160 07:05:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:33.160 07:05:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:33.160 07:05:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.160 07:05:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:33.160 [2024-10-16 07:05:32.441151] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:33.160 07:05:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.160 07:05:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:33.160 07:05:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:33.160 07:05:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:33.160 07:05:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:33.160 07:05:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:33.160 07:05:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:33.160 07:05:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:33.160 07:05:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:33.160 07:05:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:33.160 07:05:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:33.160 07:05:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:33.160 07:05:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:33.160 07:05:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:33.160 07:05:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:33.160 07:05:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:33.160 07:05:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:33.160 07:05:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:33.160 07:05:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:33.160 07:05:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:33.160 07:05:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:33.160 07:05:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:33.160 07:05:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:33.160 07:05:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:33.160 07:05:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:33.160 07:05:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:33.160 07:05:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:33.160 07:05:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.160 07:05:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:33.160 Malloc1 
00:22:33.160 [2024-10-16 07:05:32.563451] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:33.160 Malloc2
00:22:33.160 Malloc3
00:22:33.422 Malloc4
00:22:33.422 Malloc5
00:22:33.422 Malloc6
00:22:33.422 Malloc7
00:22:33.422 Malloc8
00:22:33.422 Malloc9
00:22:33.685 Malloc10
00:22:33.685 07:05:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:33.685 07:05:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems
00:22:33.685 07:05:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable
00:22:33.685 07:05:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:22:33.685 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3191813
00:22:33.685 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3191813 /var/tmp/bdevperf.sock
00:22:33.685 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 3191813 ']'
00:22:33.685 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:33.685 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100
00:22:33.685 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:22:33.685 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:33.685 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:33.685 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:33.685 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:33.685 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:22:33.685 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:22:33.685 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:33.685 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:33.685 { 00:22:33.685 "params": { 00:22:33.685 "name": "Nvme$subsystem", 00:22:33.685 "trtype": "$TEST_TRANSPORT", 00:22:33.685 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.685 "adrfam": "ipv4", 00:22:33.685 "trsvcid": "$NVMF_PORT", 00:22:33.685 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.685 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.685 "hdgst": ${hdgst:-false}, 00:22:33.685 "ddgst": ${ddgst:-false} 00:22:33.685 }, 00:22:33.685 "method": "bdev_nvme_attach_controller" 00:22:33.685 } 00:22:33.685 EOF 00:22:33.685 )") 00:22:33.685 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:33.685 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:33.685 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:33.685 { 00:22:33.685 "params": { 00:22:33.686 "name": "Nvme$subsystem", 00:22:33.686 "trtype": "$TEST_TRANSPORT", 00:22:33.686 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.686 "adrfam": "ipv4", 00:22:33.686 "trsvcid": "$NVMF_PORT", 00:22:33.686 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.686 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.686 "hdgst": ${hdgst:-false}, 00:22:33.686 "ddgst": ${ddgst:-false} 00:22:33.686 }, 00:22:33.686 "method": "bdev_nvme_attach_controller" 00:22:33.686 } 00:22:33.686 EOF 00:22:33.686 )") 00:22:33.686 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:33.686 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:33.686 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:33.686 { 00:22:33.686 "params": { 00:22:33.686 "name": "Nvme$subsystem", 00:22:33.686 "trtype": "$TEST_TRANSPORT", 00:22:33.686 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.686 "adrfam": "ipv4", 00:22:33.686 "trsvcid": "$NVMF_PORT", 00:22:33.686 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.686 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.686 "hdgst": ${hdgst:-false}, 00:22:33.686 "ddgst": ${ddgst:-false} 00:22:33.686 }, 00:22:33.686 "method": "bdev_nvme_attach_controller" 
00:22:33.686 } 00:22:33.686 EOF 00:22:33.686 )") 00:22:33.686 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:33.686 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:33.686 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:33.686 { 00:22:33.686 "params": { 00:22:33.686 "name": "Nvme$subsystem", 00:22:33.686 "trtype": "$TEST_TRANSPORT", 00:22:33.686 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.686 "adrfam": "ipv4", 00:22:33.686 "trsvcid": "$NVMF_PORT", 00:22:33.686 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.686 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.686 "hdgst": ${hdgst:-false}, 00:22:33.686 "ddgst": ${ddgst:-false} 00:22:33.686 }, 00:22:33.686 "method": "bdev_nvme_attach_controller" 00:22:33.686 } 00:22:33.686 EOF 00:22:33.686 )") 00:22:33.686 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:33.686 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:33.686 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:33.686 { 00:22:33.686 "params": { 00:22:33.686 "name": "Nvme$subsystem", 00:22:33.686 "trtype": "$TEST_TRANSPORT", 00:22:33.686 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.686 "adrfam": "ipv4", 00:22:33.686 "trsvcid": "$NVMF_PORT", 00:22:33.686 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.686 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.686 "hdgst": ${hdgst:-false}, 00:22:33.686 "ddgst": ${ddgst:-false} 00:22:33.686 }, 00:22:33.686 "method": "bdev_nvme_attach_controller" 00:22:33.686 } 00:22:33.686 EOF 00:22:33.686 )") 00:22:33.686 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:33.686 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:33.686 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:33.686 { 00:22:33.686 "params": { 00:22:33.686 "name": "Nvme$subsystem", 00:22:33.686 "trtype": "$TEST_TRANSPORT", 00:22:33.686 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.686 "adrfam": "ipv4", 00:22:33.686 "trsvcid": "$NVMF_PORT", 00:22:33.686 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.686 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.686 "hdgst": ${hdgst:-false}, 00:22:33.686 "ddgst": ${ddgst:-false} 00:22:33.686 }, 00:22:33.686 "method": "bdev_nvme_attach_controller" 00:22:33.686 } 00:22:33.686 EOF 00:22:33.686 )") 00:22:33.686 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:33.686 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:33.686 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:33.686 { 00:22:33.686 "params": { 00:22:33.686 "name": "Nvme$subsystem", 00:22:33.686 "trtype": "$TEST_TRANSPORT", 00:22:33.686 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.686 "adrfam": "ipv4", 00:22:33.686 "trsvcid": "$NVMF_PORT", 00:22:33.686 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.686 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.686 "hdgst": ${hdgst:-false}, 00:22:33.686 "ddgst": ${ddgst:-false} 00:22:33.686 }, 00:22:33.686 "method": "bdev_nvme_attach_controller" 00:22:33.686 } 00:22:33.686 EOF 00:22:33.686 )") 00:22:33.686 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:33.686 [2024-10-16 07:05:33.093816] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:22:33.686 [2024-10-16 07:05:33.093900] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:33.686 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:33.686 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:33.686 { 00:22:33.686 "params": { 00:22:33.686 "name": "Nvme$subsystem", 00:22:33.686 "trtype": "$TEST_TRANSPORT", 00:22:33.686 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.686 "adrfam": "ipv4", 00:22:33.686 "trsvcid": "$NVMF_PORT", 00:22:33.686 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.686 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.686 "hdgst": ${hdgst:-false}, 00:22:33.686 "ddgst": ${ddgst:-false} 00:22:33.686 }, 00:22:33.686 "method": "bdev_nvme_attach_controller" 00:22:33.686 } 00:22:33.686 EOF 00:22:33.686 )") 00:22:33.686 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:33.686 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:33.686 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:33.686 { 00:22:33.686 "params": { 00:22:33.686 "name": "Nvme$subsystem", 00:22:33.686 "trtype": "$TEST_TRANSPORT", 00:22:33.686 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.686 "adrfam": "ipv4", 00:22:33.686 "trsvcid": "$NVMF_PORT", 00:22:33.686 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.686 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.686 "hdgst": ${hdgst:-false}, 00:22:33.686 "ddgst": ${ddgst:-false} 00:22:33.686 }, 00:22:33.686 "method": "bdev_nvme_attach_controller" 00:22:33.686 } 00:22:33.686 EOF 00:22:33.686 )") 00:22:33.686 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:33.686 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:33.686 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:33.686 { 00:22:33.686 "params": { 00:22:33.686 "name": "Nvme$subsystem", 00:22:33.686 "trtype": "$TEST_TRANSPORT", 00:22:33.686 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.686 "adrfam": "ipv4", 00:22:33.686 "trsvcid": "$NVMF_PORT", 00:22:33.686 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.686 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.686 "hdgst": ${hdgst:-false}, 00:22:33.686 "ddgst": ${ddgst:-false} 00:22:33.686 }, 00:22:33.687 "method": "bdev_nvme_attach_controller" 00:22:33.687 } 00:22:33.687 EOF 00:22:33.687 )") 00:22:33.687 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- 
# cat 00:22:33.687 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 00:22:33.687 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:22:33.687 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:22:33.687 "params": { 00:22:33.687 "name": "Nvme1", 00:22:33.687 "trtype": "tcp", 00:22:33.687 "traddr": "10.0.0.2", 00:22:33.687 "adrfam": "ipv4", 00:22:33.687 "trsvcid": "4420", 00:22:33.687 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:33.687 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:33.687 "hdgst": false, 00:22:33.687 "ddgst": false 00:22:33.687 }, 00:22:33.687 "method": "bdev_nvme_attach_controller" 00:22:33.687 },{ 00:22:33.687 "params": { 00:22:33.687 "name": "Nvme2", 00:22:33.687 "trtype": "tcp", 00:22:33.687 "traddr": "10.0.0.2", 00:22:33.687 "adrfam": "ipv4", 00:22:33.687 "trsvcid": "4420", 00:22:33.687 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:33.687 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:33.687 "hdgst": false, 00:22:33.687 "ddgst": false 00:22:33.687 }, 00:22:33.687 "method": "bdev_nvme_attach_controller" 00:22:33.687 },{ 00:22:33.687 "params": { 00:22:33.687 "name": "Nvme3", 00:22:33.687 "trtype": "tcp", 00:22:33.687 "traddr": "10.0.0.2", 00:22:33.687 "adrfam": "ipv4", 00:22:33.687 "trsvcid": "4420", 00:22:33.687 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:33.687 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:33.687 "hdgst": false, 00:22:33.687 "ddgst": false 00:22:33.687 }, 00:22:33.687 "method": "bdev_nvme_attach_controller" 00:22:33.687 },{ 00:22:33.687 "params": { 00:22:33.687 "name": "Nvme4", 00:22:33.687 "trtype": "tcp", 00:22:33.687 "traddr": "10.0.0.2", 00:22:33.687 "adrfam": "ipv4", 00:22:33.687 "trsvcid": "4420", 00:22:33.687 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:33.687 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:33.687 "hdgst": false, 00:22:33.687 "ddgst": false 00:22:33.687 }, 00:22:33.687 "method": "bdev_nvme_attach_controller" 00:22:33.687 },{ 00:22:33.687 "params": { 00:22:33.687 "name": "Nvme5", 00:22:33.687 "trtype": "tcp", 00:22:33.687 "traddr": "10.0.0.2", 00:22:33.687 "adrfam": "ipv4", 00:22:33.687 "trsvcid": "4420", 00:22:33.687 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:33.687 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:33.687 "hdgst": false, 00:22:33.687 "ddgst": false 00:22:33.687 }, 00:22:33.687 "method": "bdev_nvme_attach_controller" 00:22:33.687 },{ 00:22:33.687 "params": { 00:22:33.687 "name": "Nvme6", 00:22:33.687 "trtype": "tcp", 00:22:33.687 "traddr": "10.0.0.2", 00:22:33.687 "adrfam": "ipv4", 00:22:33.687 "trsvcid": "4420", 00:22:33.687 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:33.687 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:33.687 "hdgst": false, 00:22:33.687 "ddgst": false 00:22:33.687 }, 00:22:33.687 "method": "bdev_nvme_attach_controller" 00:22:33.687 },{ 00:22:33.687 "params": { 00:22:33.687 "name": "Nvme7", 00:22:33.687 "trtype": "tcp", 00:22:33.687 "traddr": "10.0.0.2", 00:22:33.687 "adrfam": "ipv4", 00:22:33.687 "trsvcid": "4420", 00:22:33.687 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:33.687 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:33.687 "hdgst": false, 00:22:33.687 "ddgst": false 00:22:33.687 }, 00:22:33.687 "method": "bdev_nvme_attach_controller" 00:22:33.687 },{ 00:22:33.687 "params": { 00:22:33.687 "name": "Nvme8", 00:22:33.687 "trtype": "tcp", 00:22:33.687 "traddr": "10.0.0.2", 00:22:33.687 "adrfam": "ipv4", 00:22:33.687 
"trsvcid": "4420", 00:22:33.687 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:33.687 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:33.687 "hdgst": false, 00:22:33.687 "ddgst": false 00:22:33.687 }, 00:22:33.687 "method": "bdev_nvme_attach_controller" 00:22:33.687 },{ 00:22:33.687 "params": { 00:22:33.687 "name": "Nvme9", 00:22:33.687 "trtype": "tcp", 00:22:33.687 "traddr": "10.0.0.2", 00:22:33.687 "adrfam": "ipv4", 00:22:33.687 "trsvcid": "4420", 00:22:33.687 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:33.687 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:33.687 "hdgst": false, 00:22:33.687 "ddgst": false 00:22:33.687 }, 00:22:33.687 "method": "bdev_nvme_attach_controller" 00:22:33.687 },{ 00:22:33.687 "params": { 00:22:33.687 "name": "Nvme10", 00:22:33.687 "trtype": "tcp", 00:22:33.687 "traddr": "10.0.0.2", 00:22:33.687 "adrfam": "ipv4", 00:22:33.687 "trsvcid": "4420", 00:22:33.687 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:33.687 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:33.687 "hdgst": false, 00:22:33.687 "ddgst": false 00:22:33.687 }, 00:22:33.687 "method": "bdev_nvme_attach_controller" 00:22:33.687 }' 00:22:33.687 [2024-10-16 07:05:33.180687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.949 [2024-10-16 07:05:33.234501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:35.336 07:05:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:35.336 07:05:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:22:35.336 07:05:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:35.336 07:05:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.336 07:05:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:35.336 07:05:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.336 07:05:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3191813 00:22:35.336 07:05:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:22:35.336 07:05:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:22:36.279 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3191813 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:36.279 07:05:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3191550 00:22:36.279 07:05:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:36.279 07:05:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:36.279 07:05:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:22:36.279 07:05:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 
00:22:36.279 07:05:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:36.279 07:05:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:36.279 { 00:22:36.279 "params": { 00:22:36.279 "name": "Nvme$subsystem", 00:22:36.279 "trtype": "$TEST_TRANSPORT", 00:22:36.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.279 "adrfam": "ipv4", 00:22:36.279 "trsvcid": "$NVMF_PORT", 00:22:36.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.279 "hdgst": ${hdgst:-false}, 00:22:36.279 "ddgst": ${ddgst:-false} 00:22:36.279 }, 00:22:36.279 "method": "bdev_nvme_attach_controller" 00:22:36.279 } 00:22:36.279 EOF 00:22:36.279 )") 00:22:36.279 07:05:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:36.279 07:05:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:36.279 07:05:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:36.279 { 00:22:36.279 "params": { 00:22:36.279 "name": "Nvme$subsystem", 00:22:36.279 "trtype": "$TEST_TRANSPORT", 00:22:36.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.279 "adrfam": "ipv4", 00:22:36.279 "trsvcid": "$NVMF_PORT", 00:22:36.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.279 "hdgst": ${hdgst:-false}, 00:22:36.279 "ddgst": ${ddgst:-false} 00:22:36.279 }, 00:22:36.279 "method": "bdev_nvme_attach_controller" 00:22:36.279 } 00:22:36.279 EOF 00:22:36.279 )") 00:22:36.279 07:05:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:36.279 07:05:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:36.279 07:05:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:36.279 { 00:22:36.279 "params": { 00:22:36.279 "name": "Nvme$subsystem", 00:22:36.279 "trtype": "$TEST_TRANSPORT", 00:22:36.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.279 "adrfam": "ipv4", 00:22:36.279 "trsvcid": "$NVMF_PORT", 00:22:36.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.279 "hdgst": ${hdgst:-false}, 00:22:36.279 "ddgst": ${ddgst:-false} 00:22:36.279 }, 00:22:36.279 "method": "bdev_nvme_attach_controller" 00:22:36.279 } 00:22:36.279 EOF 00:22:36.279 )") 00:22:36.279 07:05:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:36.279 07:05:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:36.279 07:05:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:36.279 { 00:22:36.279 "params": { 00:22:36.279 "name": "Nvme$subsystem", 00:22:36.279 "trtype": "$TEST_TRANSPORT", 00:22:36.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.279 "adrfam": "ipv4", 00:22:36.279 "trsvcid": "$NVMF_PORT", 00:22:36.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.279 "hdgst": ${hdgst:-false}, 00:22:36.279 "ddgst": ${ddgst:-false} 00:22:36.279 }, 00:22:36.279 "method": 
"bdev_nvme_attach_controller" 00:22:36.279 } 00:22:36.279 EOF 00:22:36.279 )") 00:22:36.279 07:05:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:36.279 07:05:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:36.279 07:05:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:36.279 { 00:22:36.279 "params": { 00:22:36.279 "name": "Nvme$subsystem", 00:22:36.279 "trtype": "$TEST_TRANSPORT", 00:22:36.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.279 "adrfam": "ipv4", 00:22:36.279 "trsvcid": "$NVMF_PORT", 00:22:36.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.279 "hdgst": ${hdgst:-false}, 00:22:36.279 "ddgst": ${ddgst:-false} 00:22:36.279 }, 00:22:36.279 "method": "bdev_nvme_attach_controller" 00:22:36.279 } 00:22:36.279 EOF 00:22:36.279 )") 00:22:36.279 07:05:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:36.279 07:05:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:36.279 07:05:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:36.279 { 00:22:36.279 "params": { 00:22:36.279 "name": "Nvme$subsystem", 00:22:36.279 "trtype": "$TEST_TRANSPORT", 00:22:36.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.279 "adrfam": "ipv4", 00:22:36.280 "trsvcid": "$NVMF_PORT", 00:22:36.280 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.280 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.280 "hdgst": ${hdgst:-false}, 00:22:36.280 "ddgst": ${ddgst:-false} 00:22:36.280 }, 00:22:36.280 "method": "bdev_nvme_attach_controller" 00:22:36.280 } 00:22:36.280 EOF 00:22:36.280 )") 00:22:36.280 07:05:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:36.280 [2024-10-16 07:05:35.543924] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
00:22:36.280 [2024-10-16 07:05:35.543980] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3192477 ] 00:22:36.280 07:05:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:36.280 07:05:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:36.280 { 00:22:36.280 "params": { 00:22:36.280 "name": "Nvme$subsystem", 00:22:36.280 "trtype": "$TEST_TRANSPORT", 00:22:36.280 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.280 "adrfam": "ipv4", 00:22:36.280 "trsvcid": "$NVMF_PORT", 00:22:36.280 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.280 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.280 "hdgst": ${hdgst:-false}, 00:22:36.280 "ddgst": ${ddgst:-false} 00:22:36.280 }, 00:22:36.280 "method": "bdev_nvme_attach_controller" 00:22:36.280 } 00:22:36.280 EOF 00:22:36.280 )") 00:22:36.280 07:05:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:36.280 07:05:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:36.280 07:05:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:36.280 { 00:22:36.280 "params": { 00:22:36.280 "name": "Nvme$subsystem", 00:22:36.280 "trtype": "$TEST_TRANSPORT", 00:22:36.280 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.280 "adrfam": "ipv4", 00:22:36.280 "trsvcid": "$NVMF_PORT", 00:22:36.280 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.280 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.280 "hdgst": ${hdgst:-false}, 00:22:36.280 "ddgst": ${ddgst:-false} 00:22:36.280 }, 00:22:36.280 "method": "bdev_nvme_attach_controller" 00:22:36.280 } 00:22:36.280 EOF 00:22:36.280 )") 00:22:36.280 07:05:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:36.280 07:05:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:36.280 07:05:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:36.280 { 00:22:36.280 "params": { 00:22:36.280 "name": "Nvme$subsystem", 00:22:36.280 "trtype": "$TEST_TRANSPORT", 00:22:36.280 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.280 "adrfam": "ipv4", 00:22:36.280 "trsvcid": "$NVMF_PORT", 00:22:36.280 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.280 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.280 "hdgst": ${hdgst:-false}, 00:22:36.280 "ddgst": ${ddgst:-false} 00:22:36.280 }, 00:22:36.280 "method": "bdev_nvme_attach_controller" 00:22:36.280 } 00:22:36.280 EOF 00:22:36.280 )") 00:22:36.280 07:05:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:36.280 07:05:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:36.280 07:05:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:36.280 { 00:22:36.280 "params": { 00:22:36.280 "name": "Nvme$subsystem", 00:22:36.280 "trtype": "$TEST_TRANSPORT", 00:22:36.280 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.280 
"adrfam": "ipv4", 00:22:36.280 "trsvcid": "$NVMF_PORT", 00:22:36.280 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.280 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.280 "hdgst": ${hdgst:-false}, 00:22:36.280 "ddgst": ${ddgst:-false} 00:22:36.280 }, 00:22:36.280 "method": "bdev_nvme_attach_controller" 00:22:36.280 } 00:22:36.280 EOF 00:22:36.280 )") 00:22:36.280 07:05:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:36.280 07:05:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 00:22:36.280 07:05:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:22:36.280 07:05:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:22:36.280 "params": { 00:22:36.280 "name": "Nvme1", 00:22:36.280 "trtype": "tcp", 00:22:36.280 "traddr": "10.0.0.2", 00:22:36.280 "adrfam": "ipv4", 00:22:36.280 "trsvcid": "4420", 00:22:36.280 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:36.280 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:36.280 "hdgst": false, 00:22:36.280 "ddgst": false 00:22:36.280 }, 00:22:36.280 "method": "bdev_nvme_attach_controller" 00:22:36.280 },{ 00:22:36.280 "params": { 00:22:36.280 "name": "Nvme2", 00:22:36.280 "trtype": "tcp", 00:22:36.280 "traddr": "10.0.0.2", 00:22:36.280 "adrfam": "ipv4", 00:22:36.280 "trsvcid": "4420", 00:22:36.280 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:36.280 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:36.280 "hdgst": false, 00:22:36.280 "ddgst": false 00:22:36.280 }, 00:22:36.280 "method": "bdev_nvme_attach_controller" 00:22:36.280 },{ 00:22:36.280 "params": { 00:22:36.280 "name": "Nvme3", 00:22:36.280 "trtype": "tcp", 00:22:36.280 "traddr": "10.0.0.2", 00:22:36.280 "adrfam": "ipv4", 00:22:36.280 "trsvcid": "4420", 00:22:36.280 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:36.280 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:36.280 "hdgst": false, 00:22:36.280 "ddgst": false 00:22:36.280 }, 00:22:36.280 "method": "bdev_nvme_attach_controller" 00:22:36.280 },{ 00:22:36.280 "params": { 00:22:36.280 "name": "Nvme4", 00:22:36.280 "trtype": "tcp", 00:22:36.280 "traddr": "10.0.0.2", 00:22:36.280 "adrfam": "ipv4", 00:22:36.280 "trsvcid": "4420", 00:22:36.280 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:36.280 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:36.280 "hdgst": false, 00:22:36.280 "ddgst": false 00:22:36.280 }, 00:22:36.280 "method": "bdev_nvme_attach_controller" 00:22:36.280 },{ 00:22:36.280 "params": { 00:22:36.280 "name": "Nvme5", 00:22:36.280 "trtype": "tcp", 00:22:36.280 "traddr": "10.0.0.2", 00:22:36.280 "adrfam": "ipv4", 00:22:36.280 "trsvcid": "4420", 00:22:36.280 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:36.280 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:36.280 "hdgst": false, 00:22:36.280 "ddgst": false 00:22:36.280 }, 00:22:36.280 "method": "bdev_nvme_attach_controller" 00:22:36.280 },{ 00:22:36.280 "params": { 00:22:36.280 "name": "Nvme6", 00:22:36.280 "trtype": "tcp", 00:22:36.280 "traddr": "10.0.0.2", 00:22:36.280 "adrfam": "ipv4", 00:22:36.280 "trsvcid": "4420", 00:22:36.280 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:36.280 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:36.280 "hdgst": false, 00:22:36.280 "ddgst": false 00:22:36.280 }, 00:22:36.280 "method": "bdev_nvme_attach_controller" 00:22:36.280 },{ 00:22:36.280 "params": { 00:22:36.280 "name": "Nvme7", 00:22:36.280 "trtype": "tcp", 00:22:36.280 "traddr": "10.0.0.2", 
00:22:36.280 "adrfam": "ipv4", 00:22:36.280 "trsvcid": "4420", 00:22:36.280 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:36.280 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:36.280 "hdgst": false, 00:22:36.280 "ddgst": false 00:22:36.280 }, 00:22:36.280 "method": "bdev_nvme_attach_controller" 00:22:36.280 },{ 00:22:36.280 "params": { 00:22:36.280 "name": "Nvme8", 00:22:36.280 "trtype": "tcp", 00:22:36.280 "traddr": "10.0.0.2", 00:22:36.280 "adrfam": "ipv4", 00:22:36.280 "trsvcid": "4420", 00:22:36.280 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:36.280 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:36.280 "hdgst": false, 00:22:36.280 "ddgst": false 00:22:36.280 }, 00:22:36.280 "method": "bdev_nvme_attach_controller" 00:22:36.280 },{ 00:22:36.280 "params": { 00:22:36.280 "name": "Nvme9", 00:22:36.280 "trtype": "tcp", 00:22:36.280 "traddr": "10.0.0.2", 00:22:36.280 "adrfam": "ipv4", 00:22:36.280 "trsvcid": "4420", 00:22:36.280 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:36.280 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:36.280 "hdgst": false, 00:22:36.280 "ddgst": false 00:22:36.280 }, 00:22:36.280 "method": "bdev_nvme_attach_controller" 00:22:36.280 },{ 00:22:36.280 "params": { 00:22:36.280 "name": "Nvme10", 00:22:36.280 "trtype": "tcp", 00:22:36.280 "traddr": "10.0.0.2", 00:22:36.280 "adrfam": "ipv4", 00:22:36.280 "trsvcid": "4420", 00:22:36.280 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:36.280 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:36.280 "hdgst": false, 00:22:36.280 "ddgst": false 00:22:36.280 }, 00:22:36.280 "method": "bdev_nvme_attach_controller" 00:22:36.280 }' 00:22:36.280 [2024-10-16 07:05:35.625374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.280 [2024-10-16 07:05:35.662371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:37.665 Running I/O for 1 seconds... 
00:22:39.051 1865.00 IOPS, 116.56 MiB/s
00:22:39.051 Latency(us)
00:22:39.051 [2024-10-16T05:05:38.550Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:39.051 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:39.051 Verification LBA range: start 0x0 length 0x400
00:22:39.051 Nvme1n1 : 1.15 222.90 13.93 0.00 0.00 284184.96 17585.49 248162.99
00:22:39.051 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:39.051 Verification LBA range: start 0x0 length 0x400
00:22:39.051 Nvme2n1 : 1.14 224.65 14.04 0.00 0.00 277397.33 34297.17 232434.35
00:22:39.051 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:39.051 Verification LBA range: start 0x0 length 0x400
00:22:39.051 Nvme3n1 : 1.12 231.65 14.48 0.00 0.00 247271.51 3345.07 256901.12
00:22:39.051 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:39.051 Verification LBA range: start 0x0 length 0x400
00:22:39.051 Nvme4n1 : 1.14 223.73 13.98 0.00 0.00 269000.96 15400.96 249910.61
00:22:39.051 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:39.051 Verification LBA range: start 0x0 length 0x400
00:22:39.051 Nvme5n1 : 1.18 271.41 16.96 0.00 0.00 217429.33 15837.87 239424.85
00:22:39.051 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:39.051 Verification LBA range: start 0x0 length 0x400
00:22:39.051 Nvme6n1 : 1.18 270.81 16.93 0.00 0.00 214917.97 19988.48 244667.73
00:22:39.051 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:39.051 Verification LBA range: start 0x0 length 0x400
00:22:39.051 Nvme7n1 : 1.13 226.24 14.14 0.00 0.00 251669.33 16602.45 251658.24
00:22:39.051 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:39.051 Verification LBA range: start 0x0 length 0x400
00:22:39.051 Nvme8n1 : 1.19 269.85 16.87 0.00 0.00 208281.94 15619.41 241172.48
00:22:39.051 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:39.051 Verification LBA range: start 0x0 length 0x400
00:22:39.051 Nvme9n1 : 1.18 221.62 13.85 0.00 0.00 247844.46 4478.29 260396.37
00:22:39.051 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:39.051 Verification LBA range: start 0x0 length 0x400
00:22:39.051 Nvme10n1 : 1.20 266.30 16.64 0.00 0.00 203893.08 11031.89 262144.00
00:22:39.051 [2024-10-16T05:05:38.550Z] ===================================================================================================================
00:22:39.051 [2024-10-16T05:05:38.550Z] Total : 2429.16 151.82 0.00 0.00 239389.00 3345.07 262144.00
00:22:39.051 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget
00:22:39.051 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:22:39.051 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:22:39.051 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:22:39.051 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini
00:22:39.051 07:05:38
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:39.051 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:22:39.051 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:39.051 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:22:39.051 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:39.051 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:39.051 rmmod nvme_tcp 00:22:39.051 rmmod nvme_fabrics 00:22:39.051 rmmod nvme_keyring 00:22:39.051 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:39.051 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:22:39.051 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:22:39.051 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@515 -- # '[' -n 3191550 ']' 00:22:39.051 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # killprocess 3191550 00:22:39.051 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 3191550 ']' 00:22:39.051 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 3191550 00:22:39.051 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:22:39.051 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:39.051 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3191550 00:22:39.312 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:39.312 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:39.312 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3191550' 00:22:39.312 killing process with pid 3191550 00:22:39.312 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 3191550 00:22:39.312 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 3191550 00:22:39.312 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:39.312 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:39.312 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:39.312 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:22:39.312 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-restore 00:22:39.312 07:05:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-save
00:22:39.312 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:22:39.312 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:22:39.312 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:22:39.312 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:39.312 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:39.312 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:22:41.862
00:22:41.862 real 0m17.042s
00:22:41.862 user 0m34.407s
00:22:41.862 sys 0m7.049s
00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable
00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:22:41.862 ************************************
00:22:41.862 END TEST nvmf_shutdown_tc1
00:22:41.862 ************************************
00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2
00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable
00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:22:41.862 ************************************
00:22:41.862 START TEST nvmf_shutdown_tc2
00:22:41.862 ************************************
00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2
00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget
00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit
00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@467 -- # '[' -z tcp ']'
00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # prepare_net_devs
00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@436 -- # local -g is_hw=no
00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # remove_spdk_ns
00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:41.862 07:05:40
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:41.862 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:41.862 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:41.862 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:41.863 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:41.863 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:41.863 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:41.863 07:05:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:41.863 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:41.863 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:41.863 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:41.863 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:41.863 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:41.863 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:41.863 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:41.863 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:41.863 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:41.863 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:41.863 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:41.863 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:41.863 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:41.863 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:41.863 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:41.863 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:41.863 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:41.863 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:41.863 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:41.863 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:41.863 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:41.863 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:41.863 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:41.863 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # is_hw=yes 00:22:41.863 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:41.863 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:41.863 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:41.863 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:41.863 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:41.863 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:41.863 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:41.863 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:41.863 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:41.863 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:41.863 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:41.863 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:41.863 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:41.863 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:41.863 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:41.863 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:41.863 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:41.863 07:05:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:41.863 07:05:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:41.863 07:05:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:41.863 07:05:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:41.863 07:05:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:41.863 07:05:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:41.863 07:05:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:41.863 07:05:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:41.863 07:05:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:41.863 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:41.863 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.586 ms 00:22:41.863 00:22:41.863 --- 10.0.0.2 ping statistics --- 00:22:41.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:41.863 rtt min/avg/max/mdev = 0.586/0.586/0.586/0.000 ms 00:22:41.863 07:05:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:41.863 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:41.863 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:22:41.863 00:22:41.863 --- 10.0.0.1 ping statistics --- 00:22:41.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:41.863 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:22:41.863 07:05:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:41.863 07:05:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # return 0 00:22:41.863 07:05:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:41.863 07:05:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:41.863 07:05:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:41.863 07:05:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:41.863 07:05:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:41.863 07:05:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:41.863 07:05:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:41.863 07:05:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:41.863 07:05:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:41.863 07:05:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:41.863 07:05:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:41.863 07:05:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # nvmfpid=3193595 00:22:41.863 07:05:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # waitforlisten 3193595 00:22:41.863 07:05:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:41.863 07:05:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3193595 ']' 00:22:41.863 07:05:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:41.863 07:05:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:41.863 07:05:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:41.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:41.863 07:05:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:41.863 07:05:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:42.125 [2024-10-16 07:05:41.399059] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:22:42.125 [2024-10-16 07:05:41.399122] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:42.125 [2024-10-16 07:05:41.483219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:42.125 [2024-10-16 07:05:41.513814] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:42.125 [2024-10-16 07:05:41.513841] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:42.125 [2024-10-16 07:05:41.513856] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:42.125 [2024-10-16 07:05:41.513861] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:42.125 [2024-10-16 07:05:41.513865] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:42.125 [2024-10-16 07:05:41.515279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:42.125 [2024-10-16 07:05:41.515435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:42.125 [2024-10-16 07:05:41.515587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:42.125 [2024-10-16 07:05:41.515590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:42.697 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:42.697 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:22:42.697 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:42.697 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:42.697 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:42.958 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:42.958 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:42.958 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.958 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:42.958 [2024-10-16 07:05:42.223366] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:42.958 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.958 07:05:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:42.958 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:42.958 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:42.958 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:42.958 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:42.958 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:42.958 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:42.958 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:42.958 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:42.958 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:42.958 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:42.958 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:42.958 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:42.958 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:42.958 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:42.958 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:42.958 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:42.958 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:42.958 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:42.958 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:42.958 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:42.958 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:42.958 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:42.958 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:42.958 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:42.958 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:42.958 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.958 
07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:42.958 Malloc1 00:22:42.958 [2024-10-16 07:05:42.334685] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:42.958 Malloc2 00:22:42.958 Malloc3 00:22:42.958 Malloc4 00:22:43.218 Malloc5 00:22:43.218 Malloc6 00:22:43.218 Malloc7 00:22:43.218 Malloc8 00:22:43.218 Malloc9 00:22:43.218 Malloc10 00:22:43.218 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.218 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:43.219 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:43.219 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:43.480 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3193974 00:22:43.480 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3193974 /var/tmp/bdevperf.sock 00:22:43.480 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3193974 ']' 00:22:43.480 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:43.481 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:43.481 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:43.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
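A note for readers following the trace: the ten Malloc bdevs above come from target/shutdown.sh looping over num_subsystems={1..10}, cat-ing one RPC fragment per subsystem into rpcs.txt (the repeated shutdown.sh@29 cat lines), then replaying the whole file through the single rpc_cmd at shutdown.sh@36. A minimal sketch of what each fragment plausibly contains follows; the four RPC names are stock SPDK rpc.py commands, but the malloc geometry and serial numbers are illustrative assumptions, not values recovered from this log:

# Assumed shape of the per-subsystem fragment appended to rpcs.txt (i = 1..10);
# the sizes (64 MiB, 512 B blocks) and the SPDK$i serials are illustrative only.
for i in {1..10}; do
	cat >> rpcs.txt <<EOF
bdev_malloc_create 64 512 -b Malloc$i
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
# rpc_cmd then feeds rpcs.txt to the target over /var/tmp/spdk.sock in one
# batch, which is why the ten MallocN registrations appear back to back above.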
00:22:43.481 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:43.481 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:43.481 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:43.481 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:43.481 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # config=() 00:22:43.481 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # local subsystem config 00:22:43.481 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:43.481 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:43.481 { 00:22:43.481 "params": { 00:22:43.481 "name": "Nvme$subsystem", 00:22:43.481 "trtype": "$TEST_TRANSPORT", 00:22:43.481 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.481 "adrfam": "ipv4", 00:22:43.481 "trsvcid": "$NVMF_PORT", 00:22:43.481 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.481 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.481 "hdgst": ${hdgst:-false}, 00:22:43.481 "ddgst": ${ddgst:-false} 00:22:43.481 }, 00:22:43.481 "method": "bdev_nvme_attach_controller" 00:22:43.481 } 00:22:43.481 EOF 00:22:43.481 )") 00:22:43.481 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:43.481 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:43.481 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:43.481 { 00:22:43.481 "params": { 00:22:43.481 "name": "Nvme$subsystem", 00:22:43.481 "trtype": "$TEST_TRANSPORT", 00:22:43.481 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.481 "adrfam": "ipv4", 00:22:43.481 "trsvcid": "$NVMF_PORT", 00:22:43.481 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.481 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.481 "hdgst": ${hdgst:-false}, 00:22:43.481 "ddgst": ${ddgst:-false} 00:22:43.481 }, 00:22:43.481 "method": "bdev_nvme_attach_controller" 00:22:43.481 } 00:22:43.481 EOF 00:22:43.481 )") 00:22:43.481 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:43.481 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:43.481 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:43.481 { 00:22:43.481 "params": { 00:22:43.481 "name": "Nvme$subsystem", 00:22:43.481 "trtype": "$TEST_TRANSPORT", 00:22:43.481 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.481 "adrfam": "ipv4", 00:22:43.481 "trsvcid": "$NVMF_PORT", 00:22:43.481 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.481 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.481 "hdgst": ${hdgst:-false}, 00:22:43.481 "ddgst": ${ddgst:-false} 00:22:43.481 }, 00:22:43.481 "method": 
"bdev_nvme_attach_controller" 00:22:43.481 } 00:22:43.481 EOF 00:22:43.481 )") 00:22:43.481 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:43.481 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:43.481 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:43.481 { 00:22:43.481 "params": { 00:22:43.481 "name": "Nvme$subsystem", 00:22:43.481 "trtype": "$TEST_TRANSPORT", 00:22:43.481 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.481 "adrfam": "ipv4", 00:22:43.481 "trsvcid": "$NVMF_PORT", 00:22:43.481 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.481 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.481 "hdgst": ${hdgst:-false}, 00:22:43.481 "ddgst": ${ddgst:-false} 00:22:43.481 }, 00:22:43.481 "method": "bdev_nvme_attach_controller" 00:22:43.481 } 00:22:43.481 EOF 00:22:43.481 )") 00:22:43.481 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:43.481 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:43.481 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:43.481 { 00:22:43.481 "params": { 00:22:43.481 "name": "Nvme$subsystem", 00:22:43.481 "trtype": "$TEST_TRANSPORT", 00:22:43.481 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.481 "adrfam": "ipv4", 00:22:43.481 "trsvcid": "$NVMF_PORT", 00:22:43.481 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.481 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.481 "hdgst": ${hdgst:-false}, 00:22:43.481 "ddgst": ${ddgst:-false} 00:22:43.481 }, 00:22:43.481 "method": "bdev_nvme_attach_controller" 00:22:43.481 } 00:22:43.481 EOF 00:22:43.481 )") 00:22:43.481 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:43.481 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:43.481 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:43.481 { 00:22:43.481 "params": { 00:22:43.481 "name": "Nvme$subsystem", 00:22:43.481 "trtype": "$TEST_TRANSPORT", 00:22:43.481 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.481 "adrfam": "ipv4", 00:22:43.481 "trsvcid": "$NVMF_PORT", 00:22:43.481 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.481 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.481 "hdgst": ${hdgst:-false}, 00:22:43.481 "ddgst": ${ddgst:-false} 00:22:43.481 }, 00:22:43.481 "method": "bdev_nvme_attach_controller" 00:22:43.481 } 00:22:43.481 EOF 00:22:43.481 )") 00:22:43.481 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:43.481 [2024-10-16 07:05:42.777659] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
00:22:43.481 [2024-10-16 07:05:42.777714] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3193974 ] 00:22:43.481 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:43.481 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:43.481 { 00:22:43.481 "params": { 00:22:43.481 "name": "Nvme$subsystem", 00:22:43.481 "trtype": "$TEST_TRANSPORT", 00:22:43.481 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.481 "adrfam": "ipv4", 00:22:43.481 "trsvcid": "$NVMF_PORT", 00:22:43.481 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.481 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.481 "hdgst": ${hdgst:-false}, 00:22:43.481 "ddgst": ${ddgst:-false} 00:22:43.481 }, 00:22:43.481 "method": "bdev_nvme_attach_controller" 00:22:43.481 } 00:22:43.481 EOF 00:22:43.481 )") 00:22:43.481 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:43.481 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:43.481 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:43.481 { 00:22:43.481 "params": { 00:22:43.481 "name": "Nvme$subsystem", 00:22:43.481 "trtype": "$TEST_TRANSPORT", 00:22:43.481 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.481 "adrfam": "ipv4", 00:22:43.481 "trsvcid": "$NVMF_PORT", 00:22:43.481 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.481 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.481 "hdgst": ${hdgst:-false}, 00:22:43.481 "ddgst": ${ddgst:-false} 00:22:43.481 }, 00:22:43.481 "method": "bdev_nvme_attach_controller" 00:22:43.481 } 00:22:43.481 EOF 00:22:43.481 )") 00:22:43.481 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:43.481 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:43.481 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:43.481 { 00:22:43.481 "params": { 00:22:43.481 "name": "Nvme$subsystem", 00:22:43.481 "trtype": "$TEST_TRANSPORT", 00:22:43.482 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.482 "adrfam": "ipv4", 00:22:43.482 "trsvcid": "$NVMF_PORT", 00:22:43.482 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.482 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.482 "hdgst": ${hdgst:-false}, 00:22:43.482 "ddgst": ${ddgst:-false} 00:22:43.482 }, 00:22:43.482 "method": "bdev_nvme_attach_controller" 00:22:43.482 } 00:22:43.482 EOF 00:22:43.482 )") 00:22:43.482 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:43.482 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:43.482 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:43.482 { 00:22:43.482 "params": { 00:22:43.482 "name": "Nvme$subsystem", 00:22:43.482 "trtype": "$TEST_TRANSPORT", 00:22:43.482 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.482 
"adrfam": "ipv4", 00:22:43.482 "trsvcid": "$NVMF_PORT", 00:22:43.482 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.482 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.482 "hdgst": ${hdgst:-false}, 00:22:43.482 "ddgst": ${ddgst:-false} 00:22:43.482 }, 00:22:43.482 "method": "bdev_nvme_attach_controller" 00:22:43.482 } 00:22:43.482 EOF 00:22:43.482 )") 00:22:43.482 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:43.482 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # jq . 00:22:43.482 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@583 -- # IFS=, 00:22:43.482 07:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:22:43.482 "params": { 00:22:43.482 "name": "Nvme1", 00:22:43.482 "trtype": "tcp", 00:22:43.482 "traddr": "10.0.0.2", 00:22:43.482 "adrfam": "ipv4", 00:22:43.482 "trsvcid": "4420", 00:22:43.482 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:43.482 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:43.482 "hdgst": false, 00:22:43.482 "ddgst": false 00:22:43.482 }, 00:22:43.482 "method": "bdev_nvme_attach_controller" 00:22:43.482 },{ 00:22:43.482 "params": { 00:22:43.482 "name": "Nvme2", 00:22:43.482 "trtype": "tcp", 00:22:43.482 "traddr": "10.0.0.2", 00:22:43.482 "adrfam": "ipv4", 00:22:43.482 "trsvcid": "4420", 00:22:43.482 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:43.482 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:43.482 "hdgst": false, 00:22:43.482 "ddgst": false 00:22:43.482 }, 00:22:43.482 "method": "bdev_nvme_attach_controller" 00:22:43.482 },{ 00:22:43.482 "params": { 00:22:43.482 "name": "Nvme3", 00:22:43.482 "trtype": "tcp", 00:22:43.482 "traddr": "10.0.0.2", 00:22:43.482 "adrfam": "ipv4", 00:22:43.482 "trsvcid": "4420", 00:22:43.482 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:43.482 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:43.482 "hdgst": false, 00:22:43.482 "ddgst": false 00:22:43.482 }, 00:22:43.482 "method": "bdev_nvme_attach_controller" 00:22:43.482 },{ 00:22:43.482 "params": { 00:22:43.482 "name": "Nvme4", 00:22:43.482 "trtype": "tcp", 00:22:43.482 "traddr": "10.0.0.2", 00:22:43.482 "adrfam": "ipv4", 00:22:43.482 "trsvcid": "4420", 00:22:43.482 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:43.482 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:43.482 "hdgst": false, 00:22:43.482 "ddgst": false 00:22:43.482 }, 00:22:43.482 "method": "bdev_nvme_attach_controller" 00:22:43.482 },{ 00:22:43.482 "params": { 00:22:43.482 "name": "Nvme5", 00:22:43.482 "trtype": "tcp", 00:22:43.482 "traddr": "10.0.0.2", 00:22:43.482 "adrfam": "ipv4", 00:22:43.482 "trsvcid": "4420", 00:22:43.482 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:43.482 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:43.482 "hdgst": false, 00:22:43.482 "ddgst": false 00:22:43.482 }, 00:22:43.482 "method": "bdev_nvme_attach_controller" 00:22:43.482 },{ 00:22:43.482 "params": { 00:22:43.482 "name": "Nvme6", 00:22:43.482 "trtype": "tcp", 00:22:43.482 "traddr": "10.0.0.2", 00:22:43.482 "adrfam": "ipv4", 00:22:43.482 "trsvcid": "4420", 00:22:43.482 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:43.482 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:43.482 "hdgst": false, 00:22:43.482 "ddgst": false 00:22:43.482 }, 00:22:43.482 "method": "bdev_nvme_attach_controller" 00:22:43.482 },{ 00:22:43.482 "params": { 00:22:43.482 "name": "Nvme7", 00:22:43.482 "trtype": "tcp", 00:22:43.482 "traddr": "10.0.0.2", 
00:22:43.482 "adrfam": "ipv4", 00:22:43.482 "trsvcid": "4420", 00:22:43.482 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:43.482 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:43.482 "hdgst": false, 00:22:43.482 "ddgst": false 00:22:43.482 }, 00:22:43.482 "method": "bdev_nvme_attach_controller" 00:22:43.482 },{ 00:22:43.482 "params": { 00:22:43.482 "name": "Nvme8", 00:22:43.482 "trtype": "tcp", 00:22:43.482 "traddr": "10.0.0.2", 00:22:43.482 "adrfam": "ipv4", 00:22:43.482 "trsvcid": "4420", 00:22:43.482 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:43.482 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:43.482 "hdgst": false, 00:22:43.482 "ddgst": false 00:22:43.482 }, 00:22:43.482 "method": "bdev_nvme_attach_controller" 00:22:43.482 },{ 00:22:43.482 "params": { 00:22:43.482 "name": "Nvme9", 00:22:43.482 "trtype": "tcp", 00:22:43.482 "traddr": "10.0.0.2", 00:22:43.482 "adrfam": "ipv4", 00:22:43.482 "trsvcid": "4420", 00:22:43.482 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:43.482 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:43.482 "hdgst": false, 00:22:43.482 "ddgst": false 00:22:43.482 }, 00:22:43.482 "method": "bdev_nvme_attach_controller" 00:22:43.482 },{ 00:22:43.482 "params": { 00:22:43.482 "name": "Nvme10", 00:22:43.482 "trtype": "tcp", 00:22:43.482 "traddr": "10.0.0.2", 00:22:43.482 "adrfam": "ipv4", 00:22:43.482 "trsvcid": "4420", 00:22:43.482 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:43.482 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:43.482 "hdgst": false, 00:22:43.482 "ddgst": false 00:22:43.482 }, 00:22:43.482 "method": "bdev_nvme_attach_controller" 00:22:43.482 }' 00:22:43.482 [2024-10-16 07:05:42.857751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:43.482 [2024-10-16 07:05:42.894039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:44.869 Running I/O for 10 seconds... 
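The JSON block echoed just above is gen_nvmf_target_json at work: one bdev_nvme_attach_controller stanza per subsystem (the heredocs at common.sh@580), comma-joined via IFS=, and handed to bdevperf through process substitution, which is what shows up as --json /dev/fd/63 in the launch line. A reduced, assumed reconstruction; the stanza fields and bdevperf flags are copied from the trace, while the real helper additionally wraps the stanzas in the JSON envelope bdevperf expects (hence the jq . call at common.sh@582), which this sketch omits:

# Simplified, assumed stand-in for gen_nvmf_target_json.
gen_nvmf_target_json() {
	local i config=()
	for i in "$@"; do
		config+=("{\"params\": {\"name\": \"Nvme$i\", \"trtype\": \"tcp\", \"traddr\": \"10.0.0.2\", \"adrfam\": \"ipv4\", \"trsvcid\": \"4420\", \"subnqn\": \"nqn.2016-06.io.spdk:cnode$i\", \"hostnqn\": \"nqn.2016-06.io.spdk:host$i\", \"hdgst\": false, \"ddgst\": false}, \"method\": \"bdev_nvme_attach_controller\"}")
	done
	local IFS=,
	printf '%s\n' "${config[*]}"   # comma-joined, exactly what common.sh@584 prints
}
# shutdown.sh@103 then starts the initiator-side workload; the process
# substitution below is what surfaces as --json /dev/fd/63 in the trace:
#   bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json {1..10}) \
#       -q 64 -o 65536 -w verify -t 10

The waitforio loop that follows (shutdown.sh@58-68) simply polls rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1, extracts .bdevs[0].num_read_ops with jq, and sleeps 0.25 s between samples until the count reaches 100 (3, then 67, then 131 here), guaranteeing bdevperf is killed mid-I/O rather than before traffic starts.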
00:22:44.869 07:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:44.869 07:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:22:44.869 07:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:44.869 07:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.869 07:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:45.129 07:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.129 07:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:45.129 07:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:45.129 07:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:45.129 07:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:22:45.129 07:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:22:45.129 07:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:45.129 07:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:45.129 07:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:45.129 07:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:45.129 07:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.129 07:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:45.129 07:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.129 07:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:45.129 07:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:45.129 07:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:45.390 07:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:45.390 07:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:45.390 07:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:45.390 07:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:45.390 07:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.390 07:05:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:45.390 07:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.390 07:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:45.390 07:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:45.390 07:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:45.650 07:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:45.650 07:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:45.650 07:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:45.650 07:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:45.650 07:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.650 07:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:45.650 07:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.650 07:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:22:45.650 07:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:22:45.650 07:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:22:45.650 07:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:22:45.650 07:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:22:45.650 07:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3193974 00:22:45.650 07:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 3193974 ']' 00:22:45.650 07:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 3193974 00:22:45.650 07:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:22:45.650 07:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:45.650 07:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3193974 00:22:45.650 07:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:45.650 07:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:45.650 07:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3193974' 00:22:45.650 killing process with pid 3193974 00:22:45.650 07:05:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 3193974
00:22:45.650 07:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 3193974
00:22:45.911 Received shutdown signal, test time was about 0.960685 seconds
00:22:45.911
00:22:45.911 Latency(us)
00:22:45.911 [2024-10-16T05:05:45.410Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:45.911 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:45.911 Verification LBA range: start 0x0 length 0x400
00:22:45.911 Nvme1n1 : 0.91 210.40 13.15 0.00 0.00 300487.68 37355.52 244667.73
00:22:45.911 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:45.911 Verification LBA range: start 0x0 length 0x400
00:22:45.911 Nvme2n1 : 0.96 266.72 16.67 0.00 0.00 232120.32 19223.89 276125.01
00:22:45.911 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:45.911 Verification LBA range: start 0x0 length 0x400
00:22:45.911 Nvme3n1 : 0.95 269.76 16.86 0.00 0.00 224677.97 16165.55 248162.99
00:22:45.911 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:45.911 Verification LBA range: start 0x0 length 0x400
00:22:45.911 Nvme4n1 : 0.95 268.24 16.77 0.00 0.00 220997.76 20862.29 227191.47
00:22:45.911 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:45.911 Verification LBA range: start 0x0 length 0x400
00:22:45.911 Nvme5n1 : 0.93 206.22 12.89 0.00 0.00 280693.19 28617.39 237677.23
00:22:45.911 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:45.911 Verification LBA range: start 0x0 length 0x400
00:22:45.911 Nvme6n1 : 0.92 207.79 12.99 0.00 0.00 271459.84 19005.44 267386.88
00:22:45.911 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:45.911 Verification LBA range: start 0x0 length 0x400
00:22:45.911 Nvme7n1 : 0.96 267.98 16.75 0.00 0.00 206192.64 17039.36 213210.45
00:22:45.911 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:45.911 Verification LBA range: start 0x0 length 0x400
00:22:45.911 Nvme8n1 : 0.94 275.39 17.21 0.00 0.00 195209.90 5679.79 251658.24
00:22:45.911 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:45.911 Verification LBA range: start 0x0 length 0x400
00:22:45.911 Nvme9n1 : 0.95 203.16 12.70 0.00 0.00 258899.06 23156.05 269134.51
00:22:45.911 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:45.911 Verification LBA range: start 0x0 length 0x400
00:22:45.911 Nvme10n1 : 0.94 203.62 12.73 0.00 0.00 252218.88 19770.03 246415.36
00:22:45.911 [2024-10-16T05:05:45.410Z] ===================================================================================================================
00:22:45.911 [2024-10-16T05:05:45.410Z] Total : 2379.28 148.71 0.00 0.00 240150.33 5679.79 276125.01
00:22:45.911 07:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1
00:22:46.855 07:05:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3193595
00:22:46.855 07:05:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget
00:22:46.855 07:05:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:22:46.855 07:05:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:46.855 07:05:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:46.855 07:05:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:46.855 07:05:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:46.855 07:05:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:22:46.855 07:05:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:46.855 07:05:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:22:46.855 07:05:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:46.855 07:05:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:46.855 rmmod nvme_tcp 00:22:46.855 rmmod nvme_fabrics 00:22:47.115 rmmod nvme_keyring 00:22:47.115 07:05:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:47.115 07:05:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:22:47.115 07:05:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:22:47.115 07:05:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@515 -- # '[' -n 3193595 ']' 00:22:47.115 07:05:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # killprocess 3193595 00:22:47.115 07:05:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 3193595 ']' 00:22:47.115 07:05:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 3193595 00:22:47.115 07:05:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:22:47.115 07:05:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:47.115 07:05:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3193595 00:22:47.115 07:05:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:47.115 07:05:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:47.115 07:05:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3193595' 00:22:47.115 killing process with pid 3193595 00:22:47.115 07:05:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 3193595 00:22:47.115 07:05:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 3193595 00:22:47.376 07:05:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:47.376 07:05:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:47.376 07:05:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:47.376 07:05:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:22:47.376 07:05:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-save 00:22:47.376 07:05:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:47.376 07:05:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-restore 00:22:47.376 07:05:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:47.376 07:05:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:47.376 07:05:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.376 07:05:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:47.376 07:05:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.302 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:49.302 00:22:49.302 real 0m7.797s 00:22:49.302 user 0m23.369s 00:22:49.302 sys 0m1.278s 00:22:49.302 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:49.302 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:49.302 ************************************ 00:22:49.302 END TEST nvmf_shutdown_tc2 00:22:49.302 ************************************ 00:22:49.302 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:49.302 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:49.302 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:49.302 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:49.564 ************************************ 00:22:49.564 START TEST nvmf_shutdown_tc3 00:22:49.564 ************************************ 00:22:49.564 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:22:49.564 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:22:49.564 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:49.564 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:49.564 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:49.564 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:49.564 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@436 -- # local -g is_hw=no 00:22:49.564 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:49.564 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:49.564 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:49.564 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.564 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:49.564 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:49.564 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:49.564 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:49.564 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:49.564 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:49.564 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:49.564 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:49.564 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:49.564 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:49.564 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:49.564 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:22:49.564 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:49.564 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:22:49.564 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:22:49.564 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:22:49.564 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:22:49.564 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:22:49.564 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:49.564 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:49.564 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:49.564 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:49.564 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:49.564 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:49.564 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:49.564 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:49.564 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:49.564 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:49.564 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:49.564 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:49.564 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:49.564 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:49.564 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:49.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:49.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:49.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:49.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:49.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:49.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:49.565 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:49.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:49.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:49.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:49.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:49.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:49.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:49.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:49.565 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:49.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:49.565 07:05:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:49.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:49.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:49.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:49.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:49.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:49.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:49.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:49.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:49.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:49.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:49.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:49.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:49.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:49.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:49.565 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:49.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:49.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:49.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:49.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:49.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:49.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:49.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:49.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:49.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:49.565 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:49.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:49.565 07:05:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:49.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # is_hw=yes 00:22:49.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:49.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:49.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:49.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:49.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:49.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:49.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:49.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:49.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:49.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:49.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:49.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:49.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:49.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:49.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:49.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:49.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:49.565 07:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:49.565 07:05:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:49.565 07:05:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:49.565 07:05:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:49.565 07:05:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:49.827 07:05:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:49.827 07:05:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # 
ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:49.827 07:05:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:49.827 07:05:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:49.827 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:49.827 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.564 ms 00:22:49.827 00:22:49.827 --- 10.0.0.2 ping statistics --- 00:22:49.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.827 rtt min/avg/max/mdev = 0.564/0.564/0.564/0.000 ms 00:22:49.827 07:05:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:49.827 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:49.827 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:22:49.827 00:22:49.827 --- 10.0.0.1 ping statistics --- 00:22:49.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.827 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:22:49.827 07:05:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:49.827 07:05:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # return 0 00:22:49.827 07:05:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:49.827 07:05:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:49.827 07:05:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:49.827 07:05:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:49.827 07:05:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:49.827 07:05:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:49.827 07:05:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:49.827 07:05:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:49.827 07:05:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:49.827 07:05:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:49.827 07:05:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:49.827 07:05:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # nvmfpid=3195232 00:22:49.827 07:05:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # waitforlisten 3195232 00:22:49.827 07:05:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:49.827 07:05:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 3195232 ']' 00:22:49.827 07:05:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:49.827 07:05:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:49.827 07:05:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:49.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:49.827 07:05:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:49.827 07:05:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:49.827 [2024-10-16 07:05:49.285611] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:22:49.827 [2024-10-16 07:05:49.285675] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:50.188 [2024-10-16 07:05:49.373596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:50.188 [2024-10-16 07:05:49.415238] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:50.188 [2024-10-16 07:05:49.415280] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:50.188 [2024-10-16 07:05:49.415287] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:50.188 [2024-10-16 07:05:49.415292] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:50.188 [2024-10-16 07:05:49.415296] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
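For reference, the namespace plumbing that nvmf_tcp_init traced above (nvmf/common.sh@250-@291) reduces to the following shell sketch; interface names and addresses are taken verbatim from the records, and ipts is SPDK's iptables wrapper that appends the SPDK_NVMF comment seen at @788:

    ip netns add cvl_0_0_ns_spdk                                   # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP listen port
    ping -c 1 10.0.0.2                                             # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator sanity check

The waitforlisten helper traced at autotest_common.sh@831-@864 then gates the test on the target's RPC socket; a simplified sketch follows (the real helper in autotest_common.sh handles more edge cases, so treat this as an approximation of the loop visible in the trace):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 100; i != 0; i--)); do                 # local max_retries=100 in the trace
            kill -0 "$pid" 2>/dev/null || return 1       # app died before it could listen
            if rpc_cmd -s "$rpc_addr" rpc_get_methods &>/dev/null; then
                return 0                                 # RPC server is up; trace shows (( i == 0 )) as the timeout check
            fi
            sleep 0.1
        done
        return 1
    }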
00:22:50.188 [2024-10-16 07:05:49.416870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:50.188 [2024-10-16 07:05:49.416969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:50.188 [2024-10-16 07:05:49.417284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:50.188 [2024-10-16 07:05:49.417283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:50.825 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:50.825 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:22:50.825 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:50.826 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:50.826 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:50.826 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:50.826 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:50.826 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.826 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:50.826 [2024-10-16 07:05:50.143386] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:50.826 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.826 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:50.826 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:50.826 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:50.826 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:50.826 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:50.826 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:50.826 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:50.826 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:50.826 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:50.826 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:50.826 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:50.826 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:22:50.826 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:50.826 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:50.826 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:50.826 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:50.826 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:50.826 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:50.826 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:50.826 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:50.826 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:50.826 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:50.826 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:50.826 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:50.826 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:50.826 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:50.826 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.826 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:50.826 Malloc1 00:22:50.826 [2024-10-16 07:05:50.252577] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:50.826 Malloc2 00:22:50.826 Malloc3 00:22:51.087 Malloc4 00:22:51.087 Malloc5 00:22:51.087 Malloc6 00:22:51.087 Malloc7 00:22:51.087 Malloc8 00:22:51.087 Malloc9 00:22:51.087 Malloc10 00:22:51.348 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.348 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:51.348 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:51.348 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:51.348 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3195507 00:22:51.348 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3195507 /var/tmp/bdevperf.sock 00:22:51.348 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 3195507 ']' 00:22:51.348 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:51.348 07:05:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:51.348 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:51.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:51.348 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:51.348 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:51.348 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:51.348 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:51.348 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # config=() 00:22:51.348 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # local subsystem config 00:22:51.348 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:51.348 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:51.348 { 00:22:51.348 "params": { 00:22:51.348 "name": "Nvme$subsystem", 00:22:51.348 "trtype": "$TEST_TRANSPORT", 00:22:51.348 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.348 "adrfam": "ipv4", 00:22:51.348 "trsvcid": "$NVMF_PORT", 00:22:51.348 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.348 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.348 "hdgst": ${hdgst:-false}, 00:22:51.348 "ddgst": ${ddgst:-false} 00:22:51.348 }, 00:22:51.348 "method": "bdev_nvme_attach_controller" 00:22:51.348 } 00:22:51.348 EOF 00:22:51.348 )") 00:22:51.348 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:51.348 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:51.348 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:51.348 { 00:22:51.348 "params": { 00:22:51.348 "name": "Nvme$subsystem", 00:22:51.348 "trtype": "$TEST_TRANSPORT", 00:22:51.348 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.348 "adrfam": "ipv4", 00:22:51.348 "trsvcid": "$NVMF_PORT", 00:22:51.348 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.348 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.348 "hdgst": ${hdgst:-false}, 00:22:51.348 "ddgst": ${ddgst:-false} 00:22:51.348 }, 00:22:51.348 "method": "bdev_nvme_attach_controller" 00:22:51.348 } 00:22:51.348 EOF 00:22:51.348 )") 00:22:51.349 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:51.349 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:51.349 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:51.349 { 00:22:51.349 "params": { 00:22:51.349 
"name": "Nvme$subsystem", 00:22:51.349 "trtype": "$TEST_TRANSPORT", 00:22:51.349 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.349 "adrfam": "ipv4", 00:22:51.349 "trsvcid": "$NVMF_PORT", 00:22:51.349 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.349 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.349 "hdgst": ${hdgst:-false}, 00:22:51.349 "ddgst": ${ddgst:-false} 00:22:51.349 }, 00:22:51.349 "method": "bdev_nvme_attach_controller" 00:22:51.349 } 00:22:51.349 EOF 00:22:51.349 )") 00:22:51.349 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:51.349 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:51.349 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:51.349 { 00:22:51.349 "params": { 00:22:51.349 "name": "Nvme$subsystem", 00:22:51.349 "trtype": "$TEST_TRANSPORT", 00:22:51.349 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.349 "adrfam": "ipv4", 00:22:51.349 "trsvcid": "$NVMF_PORT", 00:22:51.349 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.349 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.349 "hdgst": ${hdgst:-false}, 00:22:51.349 "ddgst": ${ddgst:-false} 00:22:51.349 }, 00:22:51.349 "method": "bdev_nvme_attach_controller" 00:22:51.349 } 00:22:51.349 EOF 00:22:51.349 )") 00:22:51.349 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:51.349 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:51.349 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:51.349 { 00:22:51.349 "params": { 00:22:51.349 "name": "Nvme$subsystem", 00:22:51.349 "trtype": "$TEST_TRANSPORT", 00:22:51.349 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.349 "adrfam": "ipv4", 00:22:51.349 "trsvcid": "$NVMF_PORT", 00:22:51.349 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.349 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.349 "hdgst": ${hdgst:-false}, 00:22:51.349 "ddgst": ${ddgst:-false} 00:22:51.349 }, 00:22:51.349 "method": "bdev_nvme_attach_controller" 00:22:51.349 } 00:22:51.349 EOF 00:22:51.349 )") 00:22:51.349 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:51.349 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:51.349 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:51.349 { 00:22:51.349 "params": { 00:22:51.349 "name": "Nvme$subsystem", 00:22:51.349 "trtype": "$TEST_TRANSPORT", 00:22:51.349 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.349 "adrfam": "ipv4", 00:22:51.349 "trsvcid": "$NVMF_PORT", 00:22:51.349 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.349 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.349 "hdgst": ${hdgst:-false}, 00:22:51.349 "ddgst": ${ddgst:-false} 00:22:51.349 }, 00:22:51.349 "method": "bdev_nvme_attach_controller" 00:22:51.349 } 00:22:51.349 EOF 00:22:51.349 )") 00:22:51.349 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:51.349 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in 
"${@:-1}" 00:22:51.349 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:51.349 { 00:22:51.349 "params": { 00:22:51.349 "name": "Nvme$subsystem", 00:22:51.349 "trtype": "$TEST_TRANSPORT", 00:22:51.349 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.349 "adrfam": "ipv4", 00:22:51.349 "trsvcid": "$NVMF_PORT", 00:22:51.349 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.349 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.349 "hdgst": ${hdgst:-false}, 00:22:51.349 "ddgst": ${ddgst:-false} 00:22:51.349 }, 00:22:51.349 "method": "bdev_nvme_attach_controller" 00:22:51.349 } 00:22:51.349 EOF 00:22:51.349 )") 00:22:51.349 [2024-10-16 07:05:50.702386] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:22:51.349 [2024-10-16 07:05:50.702440] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3195507 ] 00:22:51.349 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:51.349 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:51.349 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:51.349 { 00:22:51.349 "params": { 00:22:51.349 "name": "Nvme$subsystem", 00:22:51.349 "trtype": "$TEST_TRANSPORT", 00:22:51.349 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.349 "adrfam": "ipv4", 00:22:51.349 "trsvcid": "$NVMF_PORT", 00:22:51.349 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.349 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.349 "hdgst": ${hdgst:-false}, 00:22:51.349 "ddgst": ${ddgst:-false} 00:22:51.349 }, 00:22:51.349 "method": "bdev_nvme_attach_controller" 00:22:51.349 } 00:22:51.349 EOF 00:22:51.349 )") 00:22:51.349 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:51.349 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:51.349 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:51.349 { 00:22:51.349 "params": { 00:22:51.349 "name": "Nvme$subsystem", 00:22:51.349 "trtype": "$TEST_TRANSPORT", 00:22:51.349 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.349 "adrfam": "ipv4", 00:22:51.349 "trsvcid": "$NVMF_PORT", 00:22:51.349 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.349 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.349 "hdgst": ${hdgst:-false}, 00:22:51.349 "ddgst": ${ddgst:-false} 00:22:51.349 }, 00:22:51.349 "method": "bdev_nvme_attach_controller" 00:22:51.349 } 00:22:51.349 EOF 00:22:51.349 )") 00:22:51.349 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:51.349 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:51.349 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:51.349 { 00:22:51.349 "params": { 00:22:51.349 "name": "Nvme$subsystem", 00:22:51.349 "trtype": "$TEST_TRANSPORT", 00:22:51.349 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.349 
"adrfam": "ipv4", 00:22:51.349 "trsvcid": "$NVMF_PORT", 00:22:51.349 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.349 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.349 "hdgst": ${hdgst:-false}, 00:22:51.349 "ddgst": ${ddgst:-false} 00:22:51.349 }, 00:22:51.349 "method": "bdev_nvme_attach_controller" 00:22:51.349 } 00:22:51.349 EOF 00:22:51.349 )") 00:22:51.349 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:51.349 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # jq . 00:22:51.349 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@583 -- # IFS=, 00:22:51.349 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:22:51.349 "params": { 00:22:51.349 "name": "Nvme1", 00:22:51.349 "trtype": "tcp", 00:22:51.349 "traddr": "10.0.0.2", 00:22:51.349 "adrfam": "ipv4", 00:22:51.349 "trsvcid": "4420", 00:22:51.349 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:51.349 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:51.349 "hdgst": false, 00:22:51.349 "ddgst": false 00:22:51.349 }, 00:22:51.349 "method": "bdev_nvme_attach_controller" 00:22:51.349 },{ 00:22:51.349 "params": { 00:22:51.349 "name": "Nvme2", 00:22:51.349 "trtype": "tcp", 00:22:51.349 "traddr": "10.0.0.2", 00:22:51.349 "adrfam": "ipv4", 00:22:51.349 "trsvcid": "4420", 00:22:51.349 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:51.349 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:51.349 "hdgst": false, 00:22:51.349 "ddgst": false 00:22:51.349 }, 00:22:51.349 "method": "bdev_nvme_attach_controller" 00:22:51.349 },{ 00:22:51.349 "params": { 00:22:51.349 "name": "Nvme3", 00:22:51.349 "trtype": "tcp", 00:22:51.349 "traddr": "10.0.0.2", 00:22:51.349 "adrfam": "ipv4", 00:22:51.349 "trsvcid": "4420", 00:22:51.349 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:51.349 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:51.349 "hdgst": false, 00:22:51.349 "ddgst": false 00:22:51.349 }, 00:22:51.349 "method": "bdev_nvme_attach_controller" 00:22:51.349 },{ 00:22:51.349 "params": { 00:22:51.349 "name": "Nvme4", 00:22:51.349 "trtype": "tcp", 00:22:51.349 "traddr": "10.0.0.2", 00:22:51.349 "adrfam": "ipv4", 00:22:51.349 "trsvcid": "4420", 00:22:51.349 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:51.349 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:51.349 "hdgst": false, 00:22:51.349 "ddgst": false 00:22:51.349 }, 00:22:51.349 "method": "bdev_nvme_attach_controller" 00:22:51.349 },{ 00:22:51.349 "params": { 00:22:51.349 "name": "Nvme5", 00:22:51.349 "trtype": "tcp", 00:22:51.349 "traddr": "10.0.0.2", 00:22:51.349 "adrfam": "ipv4", 00:22:51.349 "trsvcid": "4420", 00:22:51.349 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:51.349 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:51.349 "hdgst": false, 00:22:51.349 "ddgst": false 00:22:51.349 }, 00:22:51.349 "method": "bdev_nvme_attach_controller" 00:22:51.349 },{ 00:22:51.349 "params": { 00:22:51.349 "name": "Nvme6", 00:22:51.349 "trtype": "tcp", 00:22:51.349 "traddr": "10.0.0.2", 00:22:51.349 "adrfam": "ipv4", 00:22:51.350 "trsvcid": "4420", 00:22:51.350 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:51.350 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:51.350 "hdgst": false, 00:22:51.350 "ddgst": false 00:22:51.350 }, 00:22:51.350 "method": "bdev_nvme_attach_controller" 00:22:51.350 },{ 00:22:51.350 "params": { 00:22:51.350 "name": "Nvme7", 00:22:51.350 "trtype": "tcp", 00:22:51.350 "traddr": "10.0.0.2", 
00:22:51.350 "adrfam": "ipv4", 00:22:51.350 "trsvcid": "4420", 00:22:51.350 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:51.350 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:51.350 "hdgst": false, 00:22:51.350 "ddgst": false 00:22:51.350 }, 00:22:51.350 "method": "bdev_nvme_attach_controller" 00:22:51.350 },{ 00:22:51.350 "params": { 00:22:51.350 "name": "Nvme8", 00:22:51.350 "trtype": "tcp", 00:22:51.350 "traddr": "10.0.0.2", 00:22:51.350 "adrfam": "ipv4", 00:22:51.350 "trsvcid": "4420", 00:22:51.350 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:51.350 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:51.350 "hdgst": false, 00:22:51.350 "ddgst": false 00:22:51.350 }, 00:22:51.350 "method": "bdev_nvme_attach_controller" 00:22:51.350 },{ 00:22:51.350 "params": { 00:22:51.350 "name": "Nvme9", 00:22:51.350 "trtype": "tcp", 00:22:51.350 "traddr": "10.0.0.2", 00:22:51.350 "adrfam": "ipv4", 00:22:51.350 "trsvcid": "4420", 00:22:51.350 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:51.350 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:51.350 "hdgst": false, 00:22:51.350 "ddgst": false 00:22:51.350 }, 00:22:51.350 "method": "bdev_nvme_attach_controller" 00:22:51.350 },{ 00:22:51.350 "params": { 00:22:51.350 "name": "Nvme10", 00:22:51.350 "trtype": "tcp", 00:22:51.350 "traddr": "10.0.0.2", 00:22:51.350 "adrfam": "ipv4", 00:22:51.350 "trsvcid": "4420", 00:22:51.350 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:51.350 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:51.350 "hdgst": false, 00:22:51.350 "ddgst": false 00:22:51.350 }, 00:22:51.350 "method": "bdev_nvme_attach_controller" 00:22:51.350 }' 00:22:51.350 [2024-10-16 07:05:50.782788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.350 [2024-10-16 07:05:50.819344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:53.261 Running I/O for 10 seconds... 
00:22:53.261 07:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:53.261 07:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:22:53.261 07:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:53.261 07:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.261 07:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:53.261 07:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.261 07:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:53.261 07:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:53.261 07:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:53.261 07:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:53.261 07:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:22:53.261 07:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:22:53.261 07:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:53.261 07:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:53.261 07:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:53.261 07:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:53.261 07:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.261 07:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:53.261 07:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.261 07:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:53.261 07:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:53.261 07:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:53.522 07:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:53.522 07:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:53.522 07:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:53.522 07:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:53.522 07:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.522 07:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:53.522 07:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.522 07:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:53.522 07:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:53.522 07:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:53.791 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:53.791 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:53.792 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:53.792 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:53.792 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.792 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:53.792 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.792 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=136 00:22:53.792 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 136 -ge 100 ']' 00:22:53.792 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:22:53.792 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:22:53.792 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:22:53.792 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3195232 00:22:53.792 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 3195232 ']' 00:22:53.792 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 3195232 00:22:53.792 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:22:53.792 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:53.792 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3195232 00:22:53.792 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:53.792 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:53.792 07:05:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3195232' 00:22:53.792 killing process with pid 3195232 00:22:53.792 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 3195232 00:22:53.792 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 3195232 00:22:53.792 [2024-10-16 07:05:53.208312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a1620 is same with the state(6) to be set 00:22:53.792 [... the same tcp.c:1773 nvmf_tcp_qpair_set_recv_state record repeats many times for tqpair=0x13a1620, 0x13a41b0, 0x13a1af0 and 0x13a1fc0 while the target tears down its queue pairs; duplicate records elided ...]
with the state(6) to be set 00:22:53.794 [2024-10-16 07:05:53.212671] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a1fc0 is same with the state(6) to be set 00:22:53.794 [2024-10-16 07:05:53.212675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a1fc0 is same with the state(6) to be set 00:22:53.794 [2024-10-16 07:05:53.212680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a1fc0 is same with the state(6) to be set 00:22:53.794 [2024-10-16 07:05:53.212685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a1fc0 is same with the state(6) to be set 00:22:53.794 [2024-10-16 07:05:53.212689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a1fc0 is same with the state(6) to be set 00:22:53.794 [2024-10-16 07:05:53.212694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a1fc0 is same with the state(6) to be set 00:22:53.794 [2024-10-16 07:05:53.212698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a1fc0 is same with the state(6) to be set 00:22:53.794 [2024-10-16 07:05:53.212703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a1fc0 is same with the state(6) to be set 00:22:53.794 [2024-10-16 07:05:53.212708] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a1fc0 is same with the state(6) to be set 00:22:53.794 [2024-10-16 07:05:53.212713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a1fc0 is same with the state(6) to be set 00:22:53.794 [2024-10-16 07:05:53.212717] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a1fc0 is same with the state(6) to be set 00:22:53.794 [2024-10-16 07:05:53.212722] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a1fc0 is same with the state(6) to be set 00:22:53.794 [2024-10-16 07:05:53.212727] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a1fc0 is same with the state(6) to be set 00:22:53.794 [2024-10-16 07:05:53.212732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a1fc0 is same with the state(6) to be set 00:22:53.794 [2024-10-16 07:05:53.212736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a1fc0 is same with the state(6) to be set 00:22:53.794 [2024-10-16 07:05:53.212741] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a1fc0 is same with the state(6) to be set 00:22:53.794 [2024-10-16 07:05:53.212745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a1fc0 is same with the state(6) to be set 00:22:53.794 [2024-10-16 07:05:53.212750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a1fc0 is same with the state(6) to be set 00:22:53.794 [2024-10-16 07:05:53.212754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a1fc0 is same with the state(6) to be set 00:22:53.794 [2024-10-16 07:05:53.212759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a1fc0 is same with the state(6) to be set 00:22:53.794 [2024-10-16 07:05:53.212764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a1fc0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.212769] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a1fc0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.212774] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a1fc0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.212778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a1fc0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.212783] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a1fc0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.212788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a1fc0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.212794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a1fc0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.212799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a1fc0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.212804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a1fc0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.212809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a1fc0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.212813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a1fc0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.212818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a1fc0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.212823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a1fc0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.212827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a1fc0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.212832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a1fc0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.212837] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a1fc0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.212841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a1fc0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213649] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213696] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the 
state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213707] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213712] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213717] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213734] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213739] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213767] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213831] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213836] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213854] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213860] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213870] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213878] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213887] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213892] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213908] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213918] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 
07:05:53.213935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213944] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213954] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213964] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213990] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.213999] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a24b0 is same with the state(6) to be set 00:22:53.795 [2024-10-16 07:05:53.214559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2980 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.214576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2980 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.214581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2980 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215075] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215092] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215097] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same 
with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215111] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215120] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215130] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215135] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215140] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215149] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215172] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215191] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215210] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215219] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215224] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215243] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215289] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the 
state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215322] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.215379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2e50 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.216124] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3320 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.216139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3320 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.216144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3320 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.216715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.216728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.216732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.216738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.216742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.216747] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.216752] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.216757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.216761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.216766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.216771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.216775] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.216780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.216785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.216790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.216794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.216799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.216804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.796 [2024-10-16 07:05:53.216809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.216813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.216820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.216825] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.216829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.216834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.216839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.797 [2024-10-16 
07:05:53.216846] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.216851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.216857] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.216861] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.216866] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.216871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.216875] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.216880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.216885] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.216889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.216894] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.216898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.216903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.216908] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.216913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.216917] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.216922] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.216927] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.216931] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.216936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.216940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.216945] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same 
with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.216950] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.216955] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.216960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.216965] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.216969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.216974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.216978] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.216983] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.216988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.216992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.216997] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.217001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.217006] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.217011] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.217016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.217020] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3810 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.217474] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.217488] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.217493] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.217498] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.217503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.217508] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.217513] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.217518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.217522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.217527] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.217532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.217539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.217544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.217548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.217553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.217558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.217562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.217567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.217571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.217577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.217581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.217586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.217590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.217595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.217600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.217604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.217609] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the 
state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.217614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.217618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.217623] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.217628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.797 [2024-10-16 07:05:53.217632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.798 [2024-10-16 07:05:53.217637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.798 [2024-10-16 07:05:53.217642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.798 [2024-10-16 07:05:53.217646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.798 [2024-10-16 07:05:53.217651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.798 [2024-10-16 07:05:53.217655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.798 [2024-10-16 07:05:53.217660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.798 [2024-10-16 07:05:53.217665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.798 [2024-10-16 07:05:53.217670] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.798 [2024-10-16 07:05:53.217675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.798 [2024-10-16 07:05:53.217679] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.798 [2024-10-16 07:05:53.217684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.798 [2024-10-16 07:05:53.217688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.798 [2024-10-16 07:05:53.217693] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.798 [2024-10-16 07:05:53.217698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.798 [2024-10-16 07:05:53.217702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.798 [2024-10-16 07:05:53.217706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.798 [2024-10-16 07:05:53.217711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.798 [2024-10-16 07:05:53.217716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.798 [2024-10-16 07:05:53.217721] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.798 [2024-10-16 07:05:53.217725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.798 [2024-10-16 07:05:53.217730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.798 [2024-10-16 07:05:53.217734] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.798 [2024-10-16 07:05:53.217739] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.798 [2024-10-16 07:05:53.217744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.798 [2024-10-16 07:05:53.217748] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.798 [2024-10-16 07:05:53.217753] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.798 [2024-10-16 07:05:53.217757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.798 [2024-10-16 07:05:53.226934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.798 [2024-10-16 07:05:53.226971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.798 [2024-10-16 07:05:53.226989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.798 [2024-10-16 07:05:53.226998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.798 [2024-10-16 07:05:53.227007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.798 [2024-10-16 07:05:53.227017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.798 [2024-10-16 07:05:53.227032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.798 [2024-10-16 07:05:53.227040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.798 [2024-10-16 07:05:53.227051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.798 [2024-10-16 07:05:53.227058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.798 [2024-10-16 07:05:53.227068] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.798 [2024-10-16 07:05:53.227075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.798 [2024-10-16 07:05:53.227085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.798 [2024-10-16 07:05:53.227093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.798 [2024-10-16 07:05:53.227102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.798 [2024-10-16 07:05:53.227111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.798 [2024-10-16 07:05:53.227121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.798 [2024-10-16 07:05:53.227129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.798 [2024-10-16 07:05:53.227139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.798 [2024-10-16 07:05:53.227146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.798 [2024-10-16 07:05:53.227156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.798 [2024-10-16 07:05:53.227164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.798 [2024-10-16 07:05:53.227173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.798 [2024-10-16 07:05:53.227181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.798 [2024-10-16 07:05:53.227190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.798 [2024-10-16 07:05:53.227198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.798 [2024-10-16 07:05:53.227208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.798 [2024-10-16 07:05:53.227216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.798 [2024-10-16 07:05:53.227226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.798 [2024-10-16 07:05:53.227233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.798 [2024-10-16 07:05:53.227243] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.798 [2024-10-16 07:05:53.227253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.798 [2024-10-16 07:05:53.227263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.798 [2024-10-16 07:05:53.227270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.798 [2024-10-16 07:05:53.227281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.798 [2024-10-16 07:05:53.227288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.798 [2024-10-16 07:05:53.227297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.798 [2024-10-16 07:05:53.227305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.798 [2024-10-16 07:05:53.227315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.798 [2024-10-16 07:05:53.227323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.798 [2024-10-16 07:05:53.227333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.798 [2024-10-16 07:05:53.227340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.798 [2024-10-16 07:05:53.227350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.798 [2024-10-16 07:05:53.227357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.798 [2024-10-16 07:05:53.227367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.798 [2024-10-16 07:05:53.227374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.798 [2024-10-16 07:05:53.227383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.798 [2024-10-16 07:05:53.227391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.798 [2024-10-16 07:05:53.227400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.798 [2024-10-16 07:05:53.227407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.798 [2024-10-16 07:05:53.227416] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.798 [2024-10-16 07:05:53.227424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.798 [2024-10-16 07:05:53.227433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.798 [2024-10-16 07:05:53.227440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.798 [2024-10-16 07:05:53.227449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.798 [2024-10-16 07:05:53.227457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.798 [2024-10-16 07:05:53.227472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.799 [2024-10-16 07:05:53.227480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.799 [2024-10-16 07:05:53.227489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.799 [2024-10-16 07:05:53.227497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.799 [2024-10-16 07:05:53.227506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.799 [2024-10-16 07:05:53.227514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.799 [2024-10-16 07:05:53.227523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.799 [2024-10-16 07:05:53.227531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.799 [2024-10-16 07:05:53.227540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.799 [2024-10-16 07:05:53.227548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.799 [2024-10-16 07:05:53.227557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.799 [2024-10-16 07:05:53.227565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.799 [2024-10-16 07:05:53.227574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.799 [2024-10-16 07:05:53.227582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.799 [2024-10-16 07:05:53.227591] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.799 [2024-10-16 07:05:53.227599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.799 [2024-10-16 07:05:53.227609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.799 [2024-10-16 07:05:53.227617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.799 [2024-10-16 07:05:53.227626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.799 [2024-10-16 07:05:53.227633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.799 [2024-10-16 07:05:53.227643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.799 [2024-10-16 07:05:53.227650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.799 [2024-10-16 07:05:53.227659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.799 [2024-10-16 07:05:53.227667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.799 [2024-10-16 07:05:53.227676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.799 [2024-10-16 07:05:53.227685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.799 [2024-10-16 07:05:53.227695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.799 [2024-10-16 07:05:53.227702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.799 [2024-10-16 07:05:53.227712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.799 [2024-10-16 07:05:53.227719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.799 [2024-10-16 07:05:53.227728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.799 [2024-10-16 07:05:53.227736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.799 [2024-10-16 07:05:53.227745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.799 [2024-10-16 07:05:53.227752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.799 [2024-10-16 07:05:53.227761] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.799 [2024-10-16 07:05:53.227769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.799 [2024-10-16 07:05:53.227778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.799 [2024-10-16 07:05:53.227785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.799 [2024-10-16 07:05:53.227795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.799 [2024-10-16 07:05:53.227802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.799 [2024-10-16 07:05:53.227811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.799 [2024-10-16 07:05:53.227818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.799 [2024-10-16 07:05:53.227828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.799 [2024-10-16 07:05:53.227835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.799 [2024-10-16 07:05:53.227850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.799 [2024-10-16 07:05:53.227857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.799 [2024-10-16 07:05:53.227867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.799 [2024-10-16 07:05:53.227874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.799 [2024-10-16 07:05:53.227884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.799 [2024-10-16 07:05:53.227891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.799 [2024-10-16 07:05:53.227902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.799 [2024-10-16 07:05:53.227909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.799 [2024-10-16 07:05:53.227918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.799 [2024-10-16 07:05:53.227927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.799 [2024-10-16 07:05:53.227936] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.799 [2024-10-16 07:05:53.227943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.799 [2024-10-16 07:05:53.227952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.799 [2024-10-16 07:05:53.227960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.799 [2024-10-16 07:05:53.227969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.799 [2024-10-16 07:05:53.227976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.799 [2024-10-16 07:05:53.227986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.799 [2024-10-16 07:05:53.227993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.799 [2024-10-16 07:05:53.228002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.799 [2024-10-16 07:05:53.228009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.799 [2024-10-16 07:05:53.228019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.799 [2024-10-16 07:05:53.228027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.799 [2024-10-16 07:05:53.228036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.799 [2024-10-16 07:05:53.228043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.799 [2024-10-16 07:05:53.228053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.799 [2024-10-16 07:05:53.228043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.799 [2024-10-16 07:05:53.228065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.799 [2024-10-16 07:05:53.228069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.799 [2024-10-16 07:05:53.228075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.799 [2024-10-16 07:05:53.228079] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.799 [2024-10-16 07:05:53.228082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.799 [2024-10-16 07:05:53.228087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3ce0 is same with the state(6) to be set 00:22:53.799 [2024-10-16 07:05:53.228113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:53.799 [2024-10-16 07:05:53.228157] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10c9820 was disconnected and freed. reset controller. 00:22:53.799 [2024-10-16 07:05:53.228325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.799 [2024-10-16 07:05:53.228342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.799 [2024-10-16 07:05:53.228351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.799 [2024-10-16 07:05:53.228358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.799 [2024-10-16 07:05:53.228367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.799 [2024-10-16 07:05:53.228374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.799 [2024-10-16 07:05:53.228382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.800 [2024-10-16 07:05:53.228390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.800 [2024-10-16 07:05:53.228397] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc2d70 is same with the state(6) to be set 00:22:53.800 [2024-10-16 07:05:53.228423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.800 [2024-10-16 07:05:53.228432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.800 [2024-10-16 07:05:53.228441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.800 [2024-10-16 07:05:53.228448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.800 [2024-10-16 07:05:53.228456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.800 [2024-10-16 07:05:53.228463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.800 [2024-10-16 07:05:53.228471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.800 [2024-10-16 07:05:53.228479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.800 [2024-10-16 07:05:53.228486] 
nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb9a50 is same with the state(6) to be set 00:22:53.800 [2024-10-16 07:05:53.228506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.800 [2024-10-16 07:05:53.228515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.800 [2024-10-16 07:05:53.228523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.800 [2024-10-16 07:05:53.228531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.800 [2024-10-16 07:05:53.228539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.800 [2024-10-16 07:05:53.228549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.800 [2024-10-16 07:05:53.228557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.800 [2024-10-16 07:05:53.228565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.800 [2024-10-16 07:05:53.228572] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e3cf0 is same with the state(6) to be set 00:22:53.800 [2024-10-16 07:05:53.228598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.800 [2024-10-16 07:05:53.228607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.800 [2024-10-16 07:05:53.228615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.800 [2024-10-16 07:05:53.228623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.800 [2024-10-16 07:05:53.228632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.800 [2024-10-16 07:05:53.228639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.800 [2024-10-16 07:05:53.228647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.800 [2024-10-16 07:05:53.228654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.800 [2024-10-16 07:05:53.228661] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11132a0 is same with the state(6) to be set 00:22:53.800 [2024-10-16 07:05:53.228687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.800 [2024-10-16 07:05:53.228696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.800 [2024-10-16 07:05:53.228704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.800 [2024-10-16 07:05:53.228711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.800 [2024-10-16 07:05:53.228719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.800 [2024-10-16 07:05:53.228726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.800 [2024-10-16 07:05:53.228735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.800 [2024-10-16 07:05:53.228742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.800 [2024-10-16 07:05:53.228749] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ed030 is same with the state(6) to be set 00:22:53.800 [2024-10-16 07:05:53.228773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.800 [2024-10-16 07:05:53.228781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.800 [2024-10-16 07:05:53.228789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.800 [2024-10-16 07:05:53.228799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.800 [2024-10-16 07:05:53.228807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.800 [2024-10-16 07:05:53.228814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.800 [2024-10-16 07:05:53.228822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.800 [2024-10-16 07:05:53.228830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.800 [2024-10-16 07:05:53.228837] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdb610 is same with the state(6) to be set 00:22:53.800 [2024-10-16 07:05:53.228866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.800 [2024-10-16 07:05:53.228876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.800 [2024-10-16 07:05:53.228884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.800 [2024-10-16 07:05:53.228892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.800 [2024-10-16 07:05:53.228900] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.800 [2024-10-16 07:05:53.228907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.800 [2024-10-16 07:05:53.228915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.800 [2024-10-16 07:05:53.228922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.800 [2024-10-16 07:05:53.228929] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc0270 is same with the state(6) to be set 00:22:53.800 [2024-10-16 07:05:53.228955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.800 [2024-10-16 07:05:53.228964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.800 [2024-10-16 07:05:53.228972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.800 [2024-10-16 07:05:53.228980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.800 [2024-10-16 07:05:53.228988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.800 [2024-10-16 07:05:53.228996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.800 [2024-10-16 07:05:53.229004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.800 [2024-10-16 07:05:53.229011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.800 [2024-10-16 07:05:53.229019] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11145e0 is same with the state(6) to be set 00:22:53.800 [2024-10-16 07:05:53.229037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.800 [2024-10-16 07:05:53.229045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.800 [2024-10-16 07:05:53.229056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.800 [2024-10-16 07:05:53.229063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.800 [2024-10-16 07:05:53.229072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.800 [2024-10-16 07:05:53.229079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.800 [2024-10-16 07:05:53.229087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 
cdw10:00000000 cdw11:00000000 00:22:53.800 [2024-10-16 07:05:53.229095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.800 [2024-10-16 07:05:53.229102] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ecd10 is same with the state(6) to be set 00:22:53.800 [2024-10-16 07:05:53.229125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.800 [2024-10-16 07:05:53.229134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.800 [2024-10-16 07:05:53.229142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.800 [2024-10-16 07:05:53.229150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.800 [2024-10-16 07:05:53.229158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.800 [2024-10-16 07:05:53.229165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.800 [2024-10-16 07:05:53.229174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.800 [2024-10-16 07:05:53.229181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.800 [2024-10-16 07:05:53.229188] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc07f0 is same with the state(6) to be set 00:22:53.800 [2024-10-16 07:05:53.229230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.800 [2024-10-16 07:05:53.229239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.801 [2024-10-16 07:05:53.229251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.801 [2024-10-16 07:05:53.229259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.801 [2024-10-16 07:05:53.229269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.801 [2024-10-16 07:05:53.229276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.801 [2024-10-16 07:05:53.229286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.801 [2024-10-16 07:05:53.229293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.801 [2024-10-16 07:05:53.229303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.801 [2024-10-16 
07:05:53.229313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.801 [2024-10-16 07:05:53.229323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.801 [2024-10-16 07:05:53.229331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.801 [2024-10-16 07:05:53.229341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.801 [2024-10-16 07:05:53.229348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.801 [2024-10-16 07:05:53.229358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.801 [2024-10-16 07:05:53.229365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.801 [2024-10-16 07:05:53.229375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.801 [2024-10-16 07:05:53.229382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.801 [2024-10-16 07:05:53.229391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.801 [2024-10-16 07:05:53.229399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.801 [2024-10-16 07:05:53.229408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.801 [2024-10-16 07:05:53.229416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.801 [2024-10-16 07:05:53.229426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.801 [2024-10-16 07:05:53.229433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.801 [2024-10-16 07:05:53.229442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.801 [2024-10-16 07:05:53.229450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.801 [2024-10-16 07:05:53.229459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.801 [2024-10-16 07:05:53.229467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.801 [2024-10-16 07:05:53.229476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.801 [2024-10-16 07:05:53.229483] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.801 [2024-10-16 07:05:53.229493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.801 [2024-10-16 07:05:53.229500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.801 [2024-10-16 07:05:53.229510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.801 [2024-10-16 07:05:53.229517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.801 [2024-10-16 07:05:53.229529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.801 [2024-10-16 07:05:53.229536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.801 [2024-10-16 07:05:53.229546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.801 [2024-10-16 07:05:53.229553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.801 [2024-10-16 07:05:53.229563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.801 [2024-10-16 07:05:53.229570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.801 [2024-10-16 07:05:53.229580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.801 [2024-10-16 07:05:53.229587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.801 [2024-10-16 07:05:53.229597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.801 [2024-10-16 07:05:53.229604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.801 [2024-10-16 07:05:53.229614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.801 [2024-10-16 07:05:53.229621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.801 [2024-10-16 07:05:53.229631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.801 [2024-10-16 07:05:53.229638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.801 [2024-10-16 07:05:53.229647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.801 [2024-10-16 07:05:53.229655] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.801 [2024-10-16 07:05:53.229664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.801 [2024-10-16 07:05:53.229672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.801 [2024-10-16 07:05:53.229681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.801 [2024-10-16 07:05:53.229689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.801 [2024-10-16 07:05:53.229699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.801 [2024-10-16 07:05:53.229706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.801 [2024-10-16 07:05:53.229715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.801 [2024-10-16 07:05:53.229723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.801 [2024-10-16 07:05:53.229732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.801 [2024-10-16 07:05:53.237880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.801 [2024-10-16 07:05:53.237929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.801 [2024-10-16 07:05:53.237941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.801 [2024-10-16 07:05:53.237953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.801 [2024-10-16 07:05:53.237963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.801 [2024-10-16 07:05:53.237975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.801 [2024-10-16 07:05:53.237984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.801 [2024-10-16 07:05:53.237995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.801 [2024-10-16 07:05:53.238003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.801 [2024-10-16 07:05:53.238013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.801 [2024-10-16 07:05:53.238020] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.801 [2024-10-16 07:05:53.238030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.801 [2024-10-16 07:05:53.238037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.801 [2024-10-16 07:05:53.238047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.802 [2024-10-16 07:05:53.238054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.802 [2024-10-16 07:05:53.238065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.802 [2024-10-16 07:05:53.238072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.802 [2024-10-16 07:05:53.238082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.802 [2024-10-16 07:05:53.238089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.802 [2024-10-16 07:05:53.238099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.802 [2024-10-16 07:05:53.238106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.802 [2024-10-16 07:05:53.238116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.802 [2024-10-16 07:05:53.238123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.802 [2024-10-16 07:05:53.238133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.802 [2024-10-16 07:05:53.238140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.802 [2024-10-16 07:05:53.238154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.802 [2024-10-16 07:05:53.238162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.802 [2024-10-16 07:05:53.238172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.802 [2024-10-16 07:05:53.238180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.802 [2024-10-16 07:05:53.238190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.802 [2024-10-16 07:05:53.238197] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.802 [2024-10-16 07:05:53.238207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.802 [2024-10-16 07:05:53.238214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.802 [2024-10-16 07:05:53.238224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.802 [2024-10-16 07:05:53.238231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.802 [2024-10-16 07:05:53.238241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.802 [2024-10-16 07:05:53.238248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.802 [2024-10-16 07:05:53.238258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.802 [2024-10-16 07:05:53.238265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.802 [2024-10-16 07:05:53.238275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.802 [2024-10-16 07:05:53.238282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.802 [2024-10-16 07:05:53.238292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.802 [2024-10-16 07:05:53.238300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.802 [2024-10-16 07:05:53.238309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.802 [2024-10-16 07:05:53.238317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.802 [2024-10-16 07:05:53.238327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.802 [2024-10-16 07:05:53.238334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.802 [2024-10-16 07:05:53.238344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.802 [2024-10-16 07:05:53.238352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.802 [2024-10-16 07:05:53.238361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.802 [2024-10-16 07:05:53.238370] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.802 [2024-10-16 07:05:53.238380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.802 [2024-10-16 07:05:53.238387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.802 [2024-10-16 07:05:53.238397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.802 [2024-10-16 07:05:53.238405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.802 [2024-10-16 07:05:53.238415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.802 [2024-10-16 07:05:53.238422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.802 [2024-10-16 07:05:53.238432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.802 [2024-10-16 07:05:53.238439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.802 [2024-10-16 07:05:53.238449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.802 [2024-10-16 07:05:53.238457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.802 [2024-10-16 07:05:53.238466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.802 [2024-10-16 07:05:53.238474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.802 [2024-10-16 07:05:53.238483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.802 [2024-10-16 07:05:53.238490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.802 [2024-10-16 07:05:53.238500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.802 [2024-10-16 07:05:53.238508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.802 [2024-10-16 07:05:53.238517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.802 [2024-10-16 07:05:53.238525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.802 [2024-10-16 07:05:53.238601] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x11f2940 was disconnected and freed. reset controller. 
00:22:53.802 [2024-10-16 07:05:53.238659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.802 [2024-10-16 07:05:53.238670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE command / ABORTED - SQ DELETION completion pairs repeat for cid:6-63 (lba 25344-32640, step 128) and for READ cid:0-4 (lba 24576-25088) ...]
00:22:53.804 [2024-10-16 07:05:53.239820] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xfff2c0 was disconnected and freed. reset controller.
00:22:53.804 [2024-10-16 07:05:53.240030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.804 [2024-10-16 07:05:53.240048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE command / ABORTED - SQ DELETION completion pairs repeat for cid:56-63 (lba 31744-32640, step 128) and for READ cid:0-54 (lba 24576-31488) ...]
00:22:53.805 [2024-10-16 07:05:53.241197] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10c6dc0 was disconnected and freed. reset controller.
00:22:53.805 [2024-10-16 07:05:53.242615] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc2d70 (9): Bad file descriptor
00:22:53.805 [2024-10-16 07:05:53.242646] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb9a50 (9): Bad file descriptor
00:22:53.805 [2024-10-16 07:05:53.242661] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10e3cf0 (9): Bad file descriptor
00:22:53.806 [2024-10-16 07:05:53.242678] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11132a0 (9): Bad file descriptor
00:22:53.806 [2024-10-16 07:05:53.242692] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ed030 (9): Bad file descriptor
00:22:53.806 [2024-10-16 07:05:53.242707] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdb610 (9): Bad file descriptor
00:22:53.806 [2024-10-16 07:05:53.242721] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc0270 (9): Bad file descriptor
00:22:53.806 [2024-10-16 07:05:53.242734] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11145e0 (9): Bad file descriptor
00:22:53.806 [2024-10-16 07:05:53.242748] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ecd10 (9): Bad file descriptor
00:22:53.806 [2024-10-16 07:05:53.242767] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc07f0 (9): Bad file descriptor
00:22:53.806 [2024-10-16 07:05:53.246605] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:22:53.806 [2024-10-16 07:05:53.246636] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:53.806 [2024-10-16 07:05:53.247200] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:53.806 [2024-10-16 07:05:53.247228] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:22:53.806 [2024-10-16 07:05:53.247243] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:22:53.806 [2024-10-16 07:05:53.247644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:53.806 [2024-10-16 07:05:53.247660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11145e0 with addr=10.0.0.2, port=4420
00:22:53.806 [2024-10-16 07:05:53.247669] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11145e0 is same with the state(6) to be set
00:22:53.806 [2024-10-16 07:05:53.248075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:53.806 [2024-10-16 07:05:53.248115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc2d70 with addr=10.0.0.2, port=4420
00:22:53.806 [2024-10-16 07:05:53.248126] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc2d70 is same with the state(6) to be set
00:22:53.806 [2024-10-16 07:05:53.248714] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:53.806 [2024-10-16 07:05:53.248760] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:53.806 [2024-10-16 07:05:53.249063] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:53.806 [2024-10-16 07:05:53.249114] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:53.806 [2024-10-16 07:05:53.249515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:53.806 [2024-10-16 07:05:53.249532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcb9a50 with addr=10.0.0.2, port=4420
00:22:53.806 [2024-10-16 07:05:53.249541] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb9a50 is same with the state(6) to be set
00:22:53.806 [2024-10-16 07:05:53.249834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:53.806 [2024-10-16 07:05:53.249849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ed030 with addr=10.0.0.2, port=4420
00:22:53.806 [2024-10-16 07:05:53.249858] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ed030 is same with the state(6) to be set
00:22:53.806 [2024-10-16 07:05:53.249869] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11145e0 (9): Bad file descriptor
00:22:53.806 [2024-10-16 07:05:53.249880] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc2d70 (9): Bad file descriptor
00:22:53.806 [2024-10-16 07:05:53.249980] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:53.806 [2024-10-16 07:05:53.250000] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb9a50 (9): Bad file descriptor
00:22:53.806 [2024-10-16 07:05:53.250011] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ed030 (9): Bad file descriptor
00:22:53.806 [2024-10-16 07:05:53.250020] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:22:53.806 [2024-10-16 07:05:53.250027] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:22:53.806 [2024-10-16 07:05:53.250036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:22:53.806 [2024-10-16 07:05:53.250051] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:22:53.806 [2024-10-16 07:05:53.250058] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:22:53.806 [2024-10-16 07:05:53.250065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:22:53.806 [2024-10-16 07:05:53.250122] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:53.806 [2024-10-16 07:05:53.250131] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:53.806 [2024-10-16 07:05:53.250138] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:22:53.806 [2024-10-16 07:05:53.250144] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:22:53.806 [2024-10-16 07:05:53.250151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:22:53.806 [2024-10-16 07:05:53.250162] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:22:53.806 [2024-10-16 07:05:53.250169] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:22:53.806 [2024-10-16 07:05:53.250176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:22:53.806 [2024-10-16 07:05:53.250213] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:53.806 [2024-10-16 07:05:53.250220] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:53.806 [2024-10-16 07:05:53.252718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.806 [2024-10-16 07:05:53.252733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ command / ABORTED - SQ DELETION completion pairs repeat for cid:1-57 (lba 24704-31872, step 128) ...]
00:22:53.808 [2024-10-16 07:05:53.253763] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.808 [2024-10-16 07:05:53.253770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.808 [2024-10-16 07:05:53.253781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.808 [2024-10-16 07:05:53.253788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.808 [2024-10-16 07:05:53.253798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.808 [2024-10-16 07:05:53.253806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.808 [2024-10-16 07:05:53.253815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.808 [2024-10-16 07:05:53.253823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.808 [2024-10-16 07:05:53.253833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.808 [2024-10-16 07:05:53.253840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.808 [2024-10-16 07:05:53.253854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.808 [2024-10-16 07:05:53.253861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.808 [2024-10-16 07:05:53.253870] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ac640 is same with the state(6) to be set 00:22:53.808 [2024-10-16 07:05:53.255166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.808 [2024-10-16 07:05:53.255181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.808 [2024-10-16 07:05:53.255201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.808 [2024-10-16 07:05:53.255211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.808 [2024-10-16 07:05:53.255223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.808 [2024-10-16 07:05:53.255233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.808 [2024-10-16 07:05:53.255245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.808 [2024-10-16 07:05:53.255252] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.808 [2024-10-16 07:05:53.255263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.808 [2024-10-16 07:05:53.255270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.808 [2024-10-16 07:05:53.255280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.808 [2024-10-16 07:05:53.255288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.808 [2024-10-16 07:05:53.255297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.808 [2024-10-16 07:05:53.255305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.808 [2024-10-16 07:05:53.255315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.808 [2024-10-16 07:05:53.255323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.808 [2024-10-16 07:05:53.255332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.808 [2024-10-16 07:05:53.255340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.808 [2024-10-16 07:05:53.255350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.808 [2024-10-16 07:05:53.255357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.808 [2024-10-16 07:05:53.255367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.808 [2024-10-16 07:05:53.255375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.808 [2024-10-16 07:05:53.255384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.808 [2024-10-16 07:05:53.255392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.808 [2024-10-16 07:05:53.255402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.808 [2024-10-16 07:05:53.255409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.808 [2024-10-16 07:05:53.255419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.808 [2024-10-16 07:05:53.255428] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.808 [2024-10-16 07:05:53.255438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.808 [2024-10-16 07:05:53.255446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.808 [2024-10-16 07:05:53.255456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.808 [2024-10-16 07:05:53.255463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.809 [2024-10-16 07:05:53.255473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.809 [2024-10-16 07:05:53.255481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.809 [2024-10-16 07:05:53.255491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.809 [2024-10-16 07:05:53.255498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.809 [2024-10-16 07:05:53.255508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.809 [2024-10-16 07:05:53.255516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.809 [2024-10-16 07:05:53.255525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.809 [2024-10-16 07:05:53.255533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.809 [2024-10-16 07:05:53.255542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.809 [2024-10-16 07:05:53.255550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.809 [2024-10-16 07:05:53.255559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.809 [2024-10-16 07:05:53.255567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.809 [2024-10-16 07:05:53.255576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.809 [2024-10-16 07:05:53.255584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.809 [2024-10-16 07:05:53.255593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.809 [2024-10-16 07:05:53.255601] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.809 [2024-10-16 07:05:53.255611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.809 [2024-10-16 07:05:53.255619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.809 [2024-10-16 07:05:53.255629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.809 [2024-10-16 07:05:53.255637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.809 [2024-10-16 07:05:53.255648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.809 [2024-10-16 07:05:53.255655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.809 [2024-10-16 07:05:53.255665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.809 [2024-10-16 07:05:53.255673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.809 [2024-10-16 07:05:53.255683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.809 [2024-10-16 07:05:53.255690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.809 [2024-10-16 07:05:53.255700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.809 [2024-10-16 07:05:53.255708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.809 [2024-10-16 07:05:53.255717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.809 [2024-10-16 07:05:53.255725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.809 [2024-10-16 07:05:53.255735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.809 [2024-10-16 07:05:53.255742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.809 [2024-10-16 07:05:53.255752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.809 [2024-10-16 07:05:53.255759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.809 [2024-10-16 07:05:53.255769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.809 [2024-10-16 07:05:53.255777] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.809 [2024-10-16 07:05:53.255786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.809 [2024-10-16 07:05:53.255794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.809 [2024-10-16 07:05:53.255804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.809 [2024-10-16 07:05:53.255811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.809 [2024-10-16 07:05:53.255822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.809 [2024-10-16 07:05:53.255829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.809 [2024-10-16 07:05:53.255839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.809 [2024-10-16 07:05:53.255856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.809 [2024-10-16 07:05:53.255865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.809 [2024-10-16 07:05:53.255875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.809 [2024-10-16 07:05:53.255885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.809 [2024-10-16 07:05:53.255892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.809 [2024-10-16 07:05:53.255902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.809 [2024-10-16 07:05:53.255909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.809 [2024-10-16 07:05:53.255919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.809 [2024-10-16 07:05:53.255927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.809 [2024-10-16 07:05:53.255937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.809 [2024-10-16 07:05:53.255944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.809 [2024-10-16 07:05:53.255954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.809 [2024-10-16 07:05:53.255961] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.809 [2024-10-16 07:05:53.255971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.809 [2024-10-16 07:05:53.255978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.809 [2024-10-16 07:05:53.255988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.810 [2024-10-16 07:05:53.255995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.810 [2024-10-16 07:05:53.256005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.810 [2024-10-16 07:05:53.256013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.810 [2024-10-16 07:05:53.256022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.810 [2024-10-16 07:05:53.256030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.810 [2024-10-16 07:05:53.256040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.810 [2024-10-16 07:05:53.256047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.810 [2024-10-16 07:05:53.256057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.810 [2024-10-16 07:05:53.256065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.810 [2024-10-16 07:05:53.256075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.810 [2024-10-16 07:05:53.256083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.810 [2024-10-16 07:05:53.256094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.810 [2024-10-16 07:05:53.256101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.810 [2024-10-16 07:05:53.256111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.810 [2024-10-16 07:05:53.256119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.810 [2024-10-16 07:05:53.256129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.810 [2024-10-16 07:05:53.256137] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.810 [2024-10-16 07:05:53.256147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.810 [2024-10-16 07:05:53.256154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.810 [2024-10-16 07:05:53.256164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.810 [2024-10-16 07:05:53.256172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.810 [2024-10-16 07:05:53.256182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.810 [2024-10-16 07:05:53.256189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.810 [2024-10-16 07:05:53.256198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.810 [2024-10-16 07:05:53.256206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.810 [2024-10-16 07:05:53.256216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.810 [2024-10-16 07:05:53.256224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.810 [2024-10-16 07:05:53.256234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.810 [2024-10-16 07:05:53.256241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.810 [2024-10-16 07:05:53.256251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.810 [2024-10-16 07:05:53.256259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.810 [2024-10-16 07:05:53.256268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.810 [2024-10-16 07:05:53.256276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.810 [2024-10-16 07:05:53.256286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.810 [2024-10-16 07:05:53.256294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.810 [2024-10-16 07:05:53.256304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.810 [2024-10-16 07:05:53.256314] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.810 [2024-10-16 07:05:53.256323] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9a00 is same with the state(6) to be set 00:22:53.810 [2024-10-16 07:05:53.257605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.810 [2024-10-16 07:05:53.257620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.810 [2024-10-16 07:05:53.257633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.810 [2024-10-16 07:05:53.257642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.810 [2024-10-16 07:05:53.257654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.810 [2024-10-16 07:05:53.257663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.810 [2024-10-16 07:05:53.257675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.810 [2024-10-16 07:05:53.257685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.810 [2024-10-16 07:05:53.257696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.810 [2024-10-16 07:05:53.257706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.810 [2024-10-16 07:05:53.257718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.810 [2024-10-16 07:05:53.257727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.810 [2024-10-16 07:05:53.257737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.810 [2024-10-16 07:05:53.257745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.810 [2024-10-16 07:05:53.257754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.810 [2024-10-16 07:05:53.257762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.810 [2024-10-16 07:05:53.257772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.810 [2024-10-16 07:05:53.257780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.810 [2024-10-16 07:05:53.257789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.810 [2024-10-16 07:05:53.257797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.810 [2024-10-16 07:05:53.257807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.810 [2024-10-16 07:05:53.257814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.811 [2024-10-16 07:05:53.257825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.811 [2024-10-16 07:05:53.257835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.811 [2024-10-16 07:05:53.257848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.811 [2024-10-16 07:05:53.257856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.811 [2024-10-16 07:05:53.257866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.811 [2024-10-16 07:05:53.257874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.811 [2024-10-16 07:05:53.257883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.811 [2024-10-16 07:05:53.257891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.811 [2024-10-16 07:05:53.257901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.811 [2024-10-16 07:05:53.257909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.811 [2024-10-16 07:05:53.257918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.811 [2024-10-16 07:05:53.257926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.811 [2024-10-16 07:05:53.257935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.811 [2024-10-16 07:05:53.257943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.811 [2024-10-16 07:05:53.257952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.811 [2024-10-16 07:05:53.257960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.811 [2024-10-16 07:05:53.257969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 
lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.811 [2024-10-16 07:05:53.257977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.811 [2024-10-16 07:05:53.257987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.811 [2024-10-16 07:05:53.257995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.811 [2024-10-16 07:05:53.258004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.811 [2024-10-16 07:05:53.258012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.811 [2024-10-16 07:05:53.258022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.811 [2024-10-16 07:05:53.258029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.811 [2024-10-16 07:05:53.258039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.811 [2024-10-16 07:05:53.258047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.811 [2024-10-16 07:05:53.258058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.811 [2024-10-16 07:05:53.258066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.811 [2024-10-16 07:05:53.258076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.811 [2024-10-16 07:05:53.258083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.811 [2024-10-16 07:05:53.258093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.811 [2024-10-16 07:05:53.258101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.811 [2024-10-16 07:05:53.258110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.811 [2024-10-16 07:05:53.258118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.811 [2024-10-16 07:05:53.258128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.811 [2024-10-16 07:05:53.258135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.811 [2024-10-16 07:05:53.258144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.811 [2024-10-16 07:05:53.258152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.811 [2024-10-16 07:05:53.258162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.811 [2024-10-16 07:05:53.258169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.811 [2024-10-16 07:05:53.258179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.811 [2024-10-16 07:05:53.258187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.811 [2024-10-16 07:05:53.258196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.811 [2024-10-16 07:05:53.258204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.811 [2024-10-16 07:05:53.258213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.811 [2024-10-16 07:05:53.258221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.811 [2024-10-16 07:05:53.258230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.811 [2024-10-16 07:05:53.258238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.811 [2024-10-16 07:05:53.258248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.812 [2024-10-16 07:05:53.258255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.812 [2024-10-16 07:05:53.258265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.812 [2024-10-16 07:05:53.258274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.812 [2024-10-16 07:05:53.258284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.812 [2024-10-16 07:05:53.258291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.812 [2024-10-16 07:05:53.258301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.812 [2024-10-16 07:05:53.258308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.812 [2024-10-16 07:05:53.258318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:53.812 [2024-10-16 07:05:53.258325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.812 [2024-10-16 07:05:53.258335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.812 [2024-10-16 07:05:53.258342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.812 [2024-10-16 07:05:53.258352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.812 [2024-10-16 07:05:53.258360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.812 [2024-10-16 07:05:53.258369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.812 [2024-10-16 07:05:53.258377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.812 [2024-10-16 07:05:53.258386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.812 [2024-10-16 07:05:53.258394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.812 [2024-10-16 07:05:53.258403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.812 [2024-10-16 07:05:53.258411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.812 [2024-10-16 07:05:53.258421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.812 [2024-10-16 07:05:53.258428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.812 [2024-10-16 07:05:53.258438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.812 [2024-10-16 07:05:53.258445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.812 [2024-10-16 07:05:53.258455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.812 [2024-10-16 07:05:53.258462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.812 [2024-10-16 07:05:53.258472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.812 [2024-10-16 07:05:53.258479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.812 [2024-10-16 07:05:53.258490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:53.812 [2024-10-16 07:05:53.258498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.812 [2024-10-16 07:05:53.258508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.812 [2024-10-16 07:05:53.258515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.812 [2024-10-16 07:05:53.258525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.812 [2024-10-16 07:05:53.258533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.812 [2024-10-16 07:05:53.258543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.812 [2024-10-16 07:05:53.258551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.812 [2024-10-16 07:05:53.258562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.812 [2024-10-16 07:05:53.258569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.812 [2024-10-16 07:05:53.258579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.812 [2024-10-16 07:05:53.258586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.812 [2024-10-16 07:05:53.258596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.812 [2024-10-16 07:05:53.258604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.812 [2024-10-16 07:05:53.258613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.812 [2024-10-16 07:05:53.258621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.812 [2024-10-16 07:05:53.258630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.812 [2024-10-16 07:05:53.258638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.812 [2024-10-16 07:05:53.258648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.812 [2024-10-16 07:05:53.258655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.812 [2024-10-16 07:05:53.258665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.812 [2024-10-16 
07:05:53.258672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.812 [2024-10-16 07:05:53.258682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.812 [2024-10-16 07:05:53.258691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.812 [2024-10-16 07:05:53.258700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.812 [2024-10-16 07:05:53.258709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.812 [2024-10-16 07:05:53.258719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.812 [2024-10-16 07:05:53.258727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.812 [2024-10-16 07:05:53.258738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.812 [2024-10-16 07:05:53.258746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.812 [2024-10-16 07:05:53.258754] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10c82f0 is same with the state(6) to be set 00:22:53.812 [2024-10-16 07:05:53.260037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.812 [2024-10-16 07:05:53.260050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.812 [2024-10-16 07:05:53.260061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.812 [2024-10-16 07:05:53.260069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.813 [2024-10-16 07:05:53.260079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.813 [2024-10-16 07:05:53.260086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.813 [2024-10-16 07:05:53.260096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.813 [2024-10-16 07:05:53.260104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.813 [2024-10-16 07:05:53.260114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.813 [2024-10-16 07:05:53.260122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.813 [2024-10-16 07:05:53.260132] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.813 [2024-10-16 07:05:53.260139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.813 [2024-10-16 07:05:53.260149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.813 [2024-10-16 07:05:53.260157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.813 [2024-10-16 07:05:53.260167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.813 [2024-10-16 07:05:53.260175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.813 [2024-10-16 07:05:53.260184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.813 [2024-10-16 07:05:53.260192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.813 [2024-10-16 07:05:53.260202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.813 [2024-10-16 07:05:53.260209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.813 [2024-10-16 07:05:53.260222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.813 [2024-10-16 07:05:53.260229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.813 [2024-10-16 07:05:53.260240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.813 [2024-10-16 07:05:53.260247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.813 [2024-10-16 07:05:53.260257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.813 [2024-10-16 07:05:53.260265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.813 [2024-10-16 07:05:53.260275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.813 [2024-10-16 07:05:53.260282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.813 [2024-10-16 07:05:53.260292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.813 [2024-10-16 07:05:53.260300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.813 [2024-10-16 07:05:53.260309] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.813 [2024-10-16 07:05:53.260317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.813 [2024-10-16 07:05:53.260327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.813 [2024-10-16 07:05:53.260334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.813 [2024-10-16 07:05:53.260344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.813 [2024-10-16 07:05:53.260352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.813 [2024-10-16 07:05:53.260362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.813 [2024-10-16 07:05:53.260370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.813 [2024-10-16 07:05:53.260379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.813 [2024-10-16 07:05:53.260387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.813 [2024-10-16 07:05:53.260397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.813 [2024-10-16 07:05:53.260404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.813 [2024-10-16 07:05:53.260414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.813 [2024-10-16 07:05:53.260421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.813 [2024-10-16 07:05:53.260431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.813 [2024-10-16 07:05:53.260441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.813 [2024-10-16 07:05:53.260451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.813 [2024-10-16 07:05:53.260459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.813 [2024-10-16 07:05:53.260468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.813 [2024-10-16 07:05:53.260476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.813 [2024-10-16 07:05:53.260486] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.813 [2024-10-16 07:05:53.260494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.813 [2024-10-16 07:05:53.260503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.813 [2024-10-16 07:05:53.260511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.813 [2024-10-16 07:05:53.260520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.813 [2024-10-16 07:05:53.260528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.813 [2024-10-16 07:05:53.260538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.813 [2024-10-16 07:05:53.260545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.813 [2024-10-16 07:05:53.260555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.813 [2024-10-16 07:05:53.260563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.813 [2024-10-16 07:05:53.260573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.813 [2024-10-16 07:05:53.260580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.813 [2024-10-16 07:05:53.260590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.813 [2024-10-16 07:05:53.260598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.813 [2024-10-16 07:05:53.260607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.813 [2024-10-16 07:05:53.260615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.813 [2024-10-16 07:05:53.260624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.813 [2024-10-16 07:05:53.260632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.813 [2024-10-16 07:05:53.260642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.813 [2024-10-16 07:05:53.260649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.813 [2024-10-16 07:05:53.260661] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.813 [2024-10-16 07:05:53.260668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.813 [2024-10-16 07:05:53.260678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.813 [2024-10-16 07:05:53.260685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.813 [2024-10-16 07:05:53.260695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.813 [2024-10-16 07:05:53.260703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.813 [2024-10-16 07:05:53.260713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.813 [2024-10-16 07:05:53.260720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.814 [2024-10-16 07:05:53.260730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.814 [2024-10-16 07:05:53.260737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.814 [2024-10-16 07:05:53.260747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.814 [2024-10-16 07:05:53.260755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.814 [2024-10-16 07:05:53.260764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.814 [2024-10-16 07:05:53.260772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.814 [2024-10-16 07:05:53.260781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.814 [2024-10-16 07:05:53.260789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.814 [2024-10-16 07:05:53.260798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.814 [2024-10-16 07:05:53.260806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.814 [2024-10-16 07:05:53.260816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.814 [2024-10-16 07:05:53.260823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.814 [2024-10-16 07:05:53.260833] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.814 [2024-10-16 07:05:53.260841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.814 [2024-10-16 07:05:53.260855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.814 [2024-10-16 07:05:53.260862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.814 [2024-10-16 07:05:53.260871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.814 [2024-10-16 07:05:53.260880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.814 [2024-10-16 07:05:53.260890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.814 [2024-10-16 07:05:53.260898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.814 [2024-10-16 07:05:53.260907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.814 [2024-10-16 07:05:53.260915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.814 [2024-10-16 07:05:53.260924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.814 [2024-10-16 07:05:53.260932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.814 [2024-10-16 07:05:53.260941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.814 [2024-10-16 07:05:53.260949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.814 [2024-10-16 07:05:53.260958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.814 [2024-10-16 07:05:53.260966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.814 [2024-10-16 07:05:53.260976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.814 [2024-10-16 07:05:53.260983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.814 [2024-10-16 07:05:53.260993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.814 [2024-10-16 07:05:53.261000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.814 [2024-10-16 07:05:53.261010] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.814 [2024-10-16 07:05:53.261018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.814 [2024-10-16 07:05:53.261028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.814 [2024-10-16 07:05:53.261037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.814 [2024-10-16 07:05:53.261046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.814 [2024-10-16 07:05:53.261054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.814 [2024-10-16 07:05:53.261064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.814 [2024-10-16 07:05:53.261071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.814 [2024-10-16 07:05:53.261081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.814 [2024-10-16 07:05:53.261089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.814 [2024-10-16 07:05:53.261102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.814 [2024-10-16 07:05:53.261110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.814 [2024-10-16 07:05:53.261120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.814 [2024-10-16 07:05:53.261127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.814 [2024-10-16 07:05:53.261137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.814 [2024-10-16 07:05:53.261144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.814 [2024-10-16 07:05:53.261154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.814 [2024-10-16 07:05:53.261161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.814 [2024-10-16 07:05:53.261170] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200ea70 is same with the state(6) to be set 00:22:53.814 [2024-10-16 07:05:53.262435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.814 [2024-10-16 07:05:53.262448] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.814 [2024-10-16 07:05:53.262460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.814 [2024-10-16 07:05:53.262467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.814 [2024-10-16 07:05:53.262477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.814 [2024-10-16 07:05:53.262485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.814 [2024-10-16 07:05:53.262495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.814 [2024-10-16 07:05:53.262503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.814 [2024-10-16 07:05:53.262513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.814 [2024-10-16 07:05:53.262521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.814 [2024-10-16 07:05:53.262531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.814 [2024-10-16 07:05:53.262538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.814 [2024-10-16 07:05:53.262547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.814 [2024-10-16 07:05:53.262555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.814 [2024-10-16 07:05:53.262565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.814 [2024-10-16 07:05:53.262573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.814 [2024-10-16 07:05:53.262586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.814 [2024-10-16 07:05:53.262593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.814 [2024-10-16 07:05:53.262603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.814 [2024-10-16 07:05:53.262611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.814 [2024-10-16 07:05:53.262621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.814 [2024-10-16 07:05:53.262629] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.814 [2024-10-16 07:05:53.262639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.814 [2024-10-16 07:05:53.262646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.814 [2024-10-16 07:05:53.262656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.814 [2024-10-16 07:05:53.262664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.814 [2024-10-16 07:05:53.262674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.814 [2024-10-16 07:05:53.262681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.814 [2024-10-16 07:05:53.262691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.814 [2024-10-16 07:05:53.262698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.814 [2024-10-16 07:05:53.262708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.814 [2024-10-16 07:05:53.262716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.815 [2024-10-16 07:05:53.262725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.815 [2024-10-16 07:05:53.262732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.815 [2024-10-16 07:05:53.262742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.815 [2024-10-16 07:05:53.262750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.815 [2024-10-16 07:05:53.262760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.815 [2024-10-16 07:05:53.262768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.815 [2024-10-16 07:05:53.262777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.815 [2024-10-16 07:05:53.262785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.815 [2024-10-16 07:05:53.262795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.815 [2024-10-16 07:05:53.262805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.815 [2024-10-16 07:05:53.262815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.815 [2024-10-16 07:05:53.262823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.815 [2024-10-16 07:05:53.262833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.815 [2024-10-16 07:05:53.262841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.815 [2024-10-16 07:05:53.262854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.815 [2024-10-16 07:05:53.262862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.815 [2024-10-16 07:05:53.262872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.815 [2024-10-16 07:05:53.262880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.815 [2024-10-16 07:05:53.262890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.815 [2024-10-16 07:05:53.262897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.815 [2024-10-16 07:05:53.262907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.815 [2024-10-16 07:05:53.262914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.815 [2024-10-16 07:05:53.262924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.815 [2024-10-16 07:05:53.262931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.815 [2024-10-16 07:05:53.262941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.815 [2024-10-16 07:05:53.262949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.815 [2024-10-16 07:05:53.262958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.815 [2024-10-16 07:05:53.262966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.815 [2024-10-16 07:05:53.262975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.815 [2024-10-16 07:05:53.262983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.815 [2024-10-16 07:05:53.262993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.815 [2024-10-16 07:05:53.263001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.815 [2024-10-16 07:05:53.263010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.815 [2024-10-16 07:05:53.263018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.815 [2024-10-16 07:05:53.263030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.815 [2024-10-16 07:05:53.263037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.815 [2024-10-16 07:05:53.263047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.815 [2024-10-16 07:05:53.263055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.815 [2024-10-16 07:05:53.263064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.815 [2024-10-16 07:05:53.263072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.815 [2024-10-16 07:05:53.263081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.815 [2024-10-16 07:05:53.263089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.815 [2024-10-16 07:05:53.263099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.815 [2024-10-16 07:05:53.263106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.815 [2024-10-16 07:05:53.263116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.815 [2024-10-16 07:05:53.263124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.815 [2024-10-16 07:05:53.263134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.815 [2024-10-16 07:05:53.263141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.815 [2024-10-16 07:05:53.263151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.815 [2024-10-16 07:05:53.263158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:53.815 [2024-10-16 07:05:53.263168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.815 [2024-10-16 07:05:53.263175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.815 [2024-10-16 07:05:53.263185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.815 [2024-10-16 07:05:53.263193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.815 [2024-10-16 07:05:53.263203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.815 [2024-10-16 07:05:53.263210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.815 [2024-10-16 07:05:53.263220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.815 [2024-10-16 07:05:53.263228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.815 [2024-10-16 07:05:53.263237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.815 [2024-10-16 07:05:53.263247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.815 [2024-10-16 07:05:53.263257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.815 [2024-10-16 07:05:53.263264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.815 [2024-10-16 07:05:53.263273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.815 [2024-10-16 07:05:53.263281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.815 [2024-10-16 07:05:53.263291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.815 [2024-10-16 07:05:53.263298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.815 [2024-10-16 07:05:53.263308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.815 [2024-10-16 07:05:53.263316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.815 [2024-10-16 07:05:53.263325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.815 [2024-10-16 07:05:53.263333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:53.815 [2024-10-16 07:05:53.263343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.815 [2024-10-16 07:05:53.263351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.815 [2024-10-16 07:05:53.263361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.815 [2024-10-16 07:05:53.263368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.815 [2024-10-16 07:05:53.263378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.815 [2024-10-16 07:05:53.263386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.815 [2024-10-16 07:05:53.263395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.815 [2024-10-16 07:05:53.263403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.815 [2024-10-16 07:05:53.263412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.815 [2024-10-16 07:05:53.263420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.815 [2024-10-16 07:05:53.263430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.815 [2024-10-16 07:05:53.263438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.815 [2024-10-16 07:05:53.263447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.815 [2024-10-16 07:05:53.263455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.816 [2024-10-16 07:05:53.263467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.816 [2024-10-16 07:05:53.263475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.816 [2024-10-16 07:05:53.263485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.816 [2024-10-16 07:05:53.263492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.816 [2024-10-16 07:05:53.263502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.816 [2024-10-16 07:05:53.263510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.816 [2024-10-16 
07:05:53.263520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.816 [2024-10-16 07:05:53.263528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.816 [2024-10-16 07:05:53.263538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.816 [2024-10-16 07:05:53.263545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.816 [2024-10-16 07:05:53.263555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.816 [2024-10-16 07:05:53.263562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.816 [2024-10-16 07:05:53.263570] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11eaa40 is same with the state(6) to be set 00:22:53.816 [2024-10-16 07:05:53.264840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.816 [2024-10-16 07:05:53.264858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.816 [2024-10-16 07:05:53.264872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.816 [2024-10-16 07:05:53.264881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.816 [2024-10-16 07:05:53.264893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.816 [2024-10-16 07:05:53.264900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.816 [2024-10-16 07:05:53.264911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.816 [2024-10-16 07:05:53.264918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.816 [2024-10-16 07:05:53.264928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.816 [2024-10-16 07:05:53.264935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.816 [2024-10-16 07:05:53.264945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.816 [2024-10-16 07:05:53.264953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.816 [2024-10-16 07:05:53.264965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.816 [2024-10-16 07:05:53.264973] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.816 [2024-10-16 07:05:53.264982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.816 [2024-10-16 07:05:53.264989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.816 [2024-10-16 07:05:53.264999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.816 [2024-10-16 07:05:53.265006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.816 [2024-10-16 07:05:53.265016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.816 [2024-10-16 07:05:53.265024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.816 [2024-10-16 07:05:53.265034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.816 [2024-10-16 07:05:53.265041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.816 [2024-10-16 07:05:53.265051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.816 [2024-10-16 07:05:53.265058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.816 [2024-10-16 07:05:53.265068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.816 [2024-10-16 07:05:53.265075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.816 [2024-10-16 07:05:53.265084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.816 [2024-10-16 07:05:53.265092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.816 [2024-10-16 07:05:53.265101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.816 [2024-10-16 07:05:53.265109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.816 [2024-10-16 07:05:53.265119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.816 [2024-10-16 07:05:53.265127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.816 [2024-10-16 07:05:53.265136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.816 [2024-10-16 07:05:53.265143] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.816 [2024-10-16 07:05:53.265153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.816 [2024-10-16 07:05:53.265160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.816 [2024-10-16 07:05:53.265169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.816 [2024-10-16 07:05:53.265179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.816 [2024-10-16 07:05:53.265188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.816 [2024-10-16 07:05:53.265196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.816 [2024-10-16 07:05:53.265205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.816 [2024-10-16 07:05:53.265212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.816 [2024-10-16 07:05:53.265222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.816 [2024-10-16 07:05:53.265229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.816 [2024-10-16 07:05:53.265239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.816 [2024-10-16 07:05:53.265246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.816 [2024-10-16 07:05:53.265256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.816 [2024-10-16 07:05:53.265263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.816 [2024-10-16 07:05:53.265272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.816 [2024-10-16 07:05:53.265280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.816 [2024-10-16 07:05:53.265289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.816 [2024-10-16 07:05:53.265297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.816 [2024-10-16 07:05:53.265306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.816 [2024-10-16 07:05:53.265314] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.816 [2024-10-16 07:05:53.265323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.816 [2024-10-16 07:05:53.265330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.816 [2024-10-16 07:05:53.265340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.816 [2024-10-16 07:05:53.265347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.816 [2024-10-16 07:05:53.265356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.817 [2024-10-16 07:05:53.265364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.817 [2024-10-16 07:05:53.265373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.817 [2024-10-16 07:05:53.265380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.817 [2024-10-16 07:05:53.265392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.817 [2024-10-16 07:05:53.265399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.817 [2024-10-16 07:05:53.265409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.817 [2024-10-16 07:05:53.265416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.817 [2024-10-16 07:05:53.265425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.817 [2024-10-16 07:05:53.265433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.817 [2024-10-16 07:05:53.265442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.817 [2024-10-16 07:05:53.265449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.817 [2024-10-16 07:05:53.265459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.817 [2024-10-16 07:05:53.265466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.817 [2024-10-16 07:05:53.265476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.817 [2024-10-16 07:05:53.265483] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.817 [... repeated READ (sqid:1, nsid:1, cid:37-63, lba 21120-24448 in steps of 128, len:128) / ABORTED - SQ DELETION (00/08) record pairs omitted; every outstanding I/O on qid:1 was aborted when the submission queue was deleted ...]
00:22:53.817 [2024-10-16 07:05:53.265952] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ebf50 is same with the state(6) to be set
00:22:53.817 [2024-10-16 07:05:53.267471] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:22:53.817 [2024-10-16 07:05:53.267496] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:22:53.817 [2024-10-16 07:05:53.267507] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:22:53.817 [2024-10-16 07:05:53.267517] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:22:53.817 [2024-10-16 07:05:53.267604] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
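A note on the burst above: in NVMe status terms, "(00/08)" is Status Code Type 0x0 (generic command status) with Status Code 0x08, Command Aborted due to SQ Deletion, i.e. the expected fate of queued I/O when the target tears down the submission queue under test. When triaging runs like this one, a rough per-queue abort count can be pulled from a saved copy of the console output (a sketch only; 'console.log' is a placeholder path, not an artifact this job produces):

    # Tally aborted completions per submission queue id in the saved log.
    grep -o 'ABORTED - SQ DELETION (00/08) qid:[0-9]*' console.log | sort | uniq -c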
00:22:53.817 [2024-10-16 07:05:53.267618] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:53.817 [2024-10-16 07:05:53.267697] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:22:54.080 task offset: 26496 on job bdev=Nvme7n1 fails
00:22:54.080
00:22:54.080 Latency(us) - all ten jobs ran with Core Mask 0x1, workload: verify, depth: 64, IO size: 65536 over verification LBA range start 0x0 length 0x400, and each ended in error after the runtime shown:
00:22:54.080 [2024-10-16T05:05:53.579Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:54.080 Nvme1n1  : 0.96 200.35 12.52 66.78 0.00 236911.36 17913.17 230686.72
00:22:54.080 Nvme2n1  : 0.96 200.11 12.51 66.70 0.00 232425.39 17476.27 246415.36
00:22:54.080 Nvme3n1  : 0.97 198.07 12.38 66.02 0.00 230273.71 19005.44 263891.63
00:22:54.080 Nvme4n1  : 0.97 197.57 12.35 65.86 0.00 226079.89 13216.43 227191.47
00:22:54.080 Nvme5n1  : 0.96 199.85 12.49 66.62 0.00 218552.53 18022.40 251658.24
00:22:54.080 Nvme6n1  : 0.97 131.38 8.21 65.69 0.00 289655.18 20862.29 265639.25
00:22:54.080 Nvme7n1  : 0.96 200.68 12.54 66.89 0.00 208121.39 14636.37 244667.73
00:22:54.080 Nvme8n1  : 0.98 196.59 12.29 65.53 0.00 208388.69 32768.00 210589.01
00:22:54.080 Nvme9n1  : 0.98 130.74 8.17 65.37 0.00 272444.59 14854.83 256901.12
00:22:54.080 Nvme10n1 : 0.98 130.42 8.15 65.21 0.00 266896.78 21408.43 276125.01
00:22:54.080 [2024-10-16T05:05:53.579Z]
===================================================================================================================
00:22:54.080 [2024-10-16T05:05:53.579Z] Total : 1785.77 111.61 660.68 0.00 235945.99 13216.43 276125.01
00:22:54.081 [2024-10-16 07:05:53.292087] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:22:54.081 [2024-10-16 07:05:53.292135] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:22:54.081 [... posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 against addr=10.0.0.2, port=4420 for tqpairs 0xcc07f0, 0x10ecd10, 0xbdb610 and 0xcc0270, each followed by nvme_tcp.c:2399 sock connection errors and nvme_tcp.c:337 recv state errors ...]
00:22:54.081 [2024-10-16 07:05:53.295200] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller; the same notice follows for cnode7, cnode5 and cnode2
00:22:54.081 [... the same connect() failed, errno = 111 pattern for tqpairs 0x10e3cf0, 0x11132a0, 0xcc2d70, 0x11145e0, 0x10ed030 and 0xcb9a50; nvme_tcp.c:2196 'Failed to flush tqpair (9): Bad file descriptor' reported for all ten tqpairs; bdev_nvme.c:3031 'Unable to perform failover, already in progress.' repeated four more times ...]
00:22:54.081 [... for each of cnode1-cnode10: nvme_ctrlr.c:4193 'Ctrlr is in error state', nvme_ctrlr.c:1822 'controller reinitialization failed', nvme_ctrlr.c:1106 'in failed state.', then bdev_nvme.c:2181 'Resetting controller failed.' ...]
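The cascade above is the initiator-side view of the target being stopped deliberately: errno = 111 is ECONNREFUSED, so every reconnect attempt fails because nothing is listening on 10.0.0.2:4420 any more, and bdev_nvme eventually gives up with 'Resetting controller failed.' for every subsystem. One way to confirm that state by hand (a sketch, not part of the test; cvl_0_0_ns_spdk is the target namespace these tests use):

    # Check whether any listener is still up on the NVMe/TCP port inside the target netns.
    ip netns exec cvl_0_0_ns_spdk ss -ltn 'sport = :4420'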
00:22:54.081 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:22:55.025 07:05:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3195507 00:22:55.025 07:05:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:22:55.025 07:05:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3195507 00:22:55.025 07:05:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:22:55.025 07:05:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:55.025 07:05:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:22:55.025 07:05:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:55.025 07:05:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 3195507 00:22:55.025 07:05:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:22:55.025 07:05:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:55.025 07:05:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:22:55.025 07:05:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:22:55.025 07:05:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:22:55.025 07:05:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:55.025 07:05:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:22:55.025 07:05:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:55.025 07:05:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:55.025 07:05:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:55.025 07:05:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:55.025 07:05:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:55.025 07:05:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:22:55.025 07:05:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:55.025 07:05:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:22:55.025 07:05:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:55.025 07:05:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:55.025 rmmod nvme_tcp 00:22:55.025 
rmmod nvme_fabrics 00:22:55.287 rmmod nvme_keyring 00:22:55.287 07:05:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:55.287 07:05:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:22:55.287 07:05:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:22:55.287 07:05:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@515 -- # '[' -n 3195232 ']' 00:22:55.287 07:05:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # killprocess 3195232 00:22:55.287 07:05:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 3195232 ']' 00:22:55.287 07:05:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 3195232 00:22:55.287 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3195232) - No such process 00:22:55.287 07:05:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@977 -- # echo 'Process with pid 3195232 is not found' 00:22:55.287 Process with pid 3195232 is not found 00:22:55.287 07:05:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:55.287 07:05:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:55.287 07:05:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:55.287 07:05:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:22:55.287 07:05:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-save 00:22:55.287 07:05:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:55.287 07:05:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-restore 00:22:55.287 07:05:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:55.287 07:05:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:55.287 07:05:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:55.287 07:05:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:55.287 07:05:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:57.200 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:57.200 00:22:57.200 real 0m7.807s 00:22:57.200 user 0m19.073s 00:22:57.200 sys 0m1.304s 00:22:57.200 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:57.200 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:57.200 ************************************ 00:22:57.200 END TEST nvmf_shutdown_tc3 00:22:57.200 ************************************ 00:22:57.200 07:05:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:22:57.200 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:22:57.200 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:22:57.200 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:57.200 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:57.200 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:57.461 ************************************ 00:22:57.461 START TEST nvmf_shutdown_tc4 00:22:57.461 ************************************ 00:22:57.461 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc4 00:22:57.461 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:22:57.461 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:57.461 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:57.461 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:57.461 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:57.461 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:57.461 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:57.461 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:57.461 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:57.461 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:57.461 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:57.461 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:57.462 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:57.462 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:57.462 07:05:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:57.462 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:57.462 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # is_hw=yes 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:57.462 07:05:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:57.462 07:05:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:57.723 07:05:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:57.723 07:05:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:57.723 07:05:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:57.723 07:05:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:57.723 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:57.723 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.531 ms 00:22:57.723 00:22:57.723 --- 10.0.0.2 ping statistics --- 00:22:57.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.723 rtt min/avg/max/mdev = 0.531/0.531/0.531/0.000 ms 00:22:57.723 07:05:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:57.723 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:57.723 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:22:57.723 00:22:57.723 --- 10.0.0.1 ping statistics --- 00:22:57.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.723 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:22:57.723 07:05:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:57.723 07:05:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@448 -- # return 0 00:22:57.723 07:05:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:57.723 07:05:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:57.723 07:05:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:57.724 07:05:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:57.724 07:05:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:57.724 07:05:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:57.724 07:05:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:57.724 07:05:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:57.724 07:05:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:57.724 07:05:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:57.724 07:05:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:57.724 07:05:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # nvmfpid=3196957 00:22:57.724 07:05:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # waitforlisten 3196957 00:22:57.724 07:05:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:57.724 07:05:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@831 -- # '[' -z 3196957 ']' 00:22:57.724 07:05:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:57.724 07:05:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:57.724 07:05:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:57.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
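Before the nvmf_tgt launch above, the trace rebuilt the split-namespace test network; stripped of the xtrace noise it reduces to a short recipe (this sketch reuses the run's e810 interface names cvl_0_0/cvl_0_1 and would need adjusting on other hardware):

    ip netns add cvl_0_0_ns_spdk                        # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator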
00:22:57.724 07:05:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:57.724 07:05:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:57.724 [2024-10-16 07:05:57.180587] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:22:57.724 [2024-10-16 07:05:57.180652] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:58.024 [2024-10-16 07:05:57.267335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:58.024 [2024-10-16 07:05:57.303180] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:58.024 [2024-10-16 07:05:57.303224] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:58.024 [2024-10-16 07:05:57.303231] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:58.024 [2024-10-16 07:05:57.303235] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:58.024 [2024-10-16 07:05:57.303239] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:58.024 [2024-10-16 07:05:57.304823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:58.024 [2024-10-16 07:05:57.304978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:58.024 [2024-10-16 07:05:57.305182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:58.024 [2024-10-16 07:05:57.305182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:58.594 07:05:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:58.594 07:05:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # return 0 00:22:58.594 07:05:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:58.594 07:05:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:58.594 07:05:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:58.594 07:05:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:58.594 07:05:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:58.594 07:05:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.594 07:05:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:58.594 [2024-10-16 07:05:58.004430] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:58.594 07:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.594 07:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:58.594 07:05:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:58.594 07:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:58.594 07:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:58.594 07:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:58.594 07:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:58.594 07:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:58.594 07:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:58.594 07:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:58.594 07:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:58.594 07:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:58.594 07:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:58.594 07:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:58.594 07:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:58.594 07:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:58.594 07:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:58.594 07:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:58.594 07:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:58.594 07:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:58.594 07:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:58.594 07:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:58.594 07:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:58.594 07:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:58.594 07:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:58.594 07:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:58.594 07:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:58.594 07:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.594 07:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:58.863 Malloc1 
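The create_subsystems loop above only concatenates per-subsystem RPC batches into rpcs.txt and replays them through a single rpc_cmd invocation; unrolled for one subsystem it amounts to roughly the following (a sketch: the transport options and the 10.0.0.2:4420 listener are visible in this log, while the Malloc sizing and serial number are illustrative stand-ins for the values in target/shutdown.sh):

    # One subsystem's worth of the batched target setup (sizes/serial assumed).
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create -b Malloc1 128 512        # 128 MiB bdev, 512 B blocks (assumed)
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420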
00:22:58.863 [2024-10-16 07:05:58.114543] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:58.863 Malloc2 00:22:58.863 Malloc3 00:22:58.863 Malloc4 00:22:58.863 Malloc5 00:22:58.863 Malloc6 00:22:58.863 Malloc7 00:22:59.121 Malloc8 00:22:59.121 Malloc9 00:22:59.121 Malloc10 00:22:59.121 07:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.121 07:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:59.121 07:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:59.121 07:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:59.121 07:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3197333 00:22:59.121 07:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:22:59.121 07:05:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:22:59.122 [2024-10-16 07:05:58.587540] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:04.409 07:06:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:04.409 07:06:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3196957 00:23:04.409 07:06:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 3196957 ']' 00:23:04.409 07:06:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 3196957 00:23:04.409 07:06:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # uname 00:23:04.409 07:06:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:04.409 07:06:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3196957 00:23:04.409 07:06:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:04.409 07:06:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:04.409 07:06:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3196957' 00:23:04.409 killing process with pid 3196957 00:23:04.409 07:06:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@969 -- # kill 3196957 00:23:04.409 07:06:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@974 -- # wait 3196957 00:23:04.409 [2024-10-16 07:06:03.591819] 
00:23:04.409 [2024-10-16 07:06:03.591819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a2350 is same with the state(6) to be set
[the recv-state error above repeats seven more times for tqpair=0x23a2350 (07:06:03.591869-591901), then six times for tqpair=0x23a2840 (07:06:03.592403-592463) and seven times for tqpair=0x23a1990 (07:06:03.592596-592645), interleaved with the write failures below]
00:23:04.409 Write completed with error (sct=0, sc=8)
00:23:04.409 starting I/O failed: -6
[these two messages repeat, in this order, for every write outstanding on the dying connections; the repeats are elided here and for the rest of this capture]
00:23:04.409 [2024-10-16 07:06:03.592872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:04.410 [2024-10-16 07:06:03.593829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:04.410 [2024-10-16 07:06:03.594973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:23:04.411 [2024-10-16 07:06:03.596524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:23:04.411 NVMe io qpair process completion error
00:23:04.411 [2024-10-16 07:06:03.597561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
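Two failure signatures interleave in this capture, both downstream of the target being killed: writes already queued complete with NVMe status sct=0, sc=8 (status code type 0, generic status 0x08: command aborted due to SQ deletion), while new submissions fail with -6 (-ENXIO, "No such device or address"), which spdk_nvme_qpair_process_completions then reports as a CQ transport error per qpair; each "NVMe io qpair process completion error" line appears as perf gives up on one connection, and the tcp.c recv-state messages come from the nvmf target tearing down its side of the same connections as it exits. When digesting a raw console log like this, a filter along these lines keeps the signal (a sketch; console.log is an assumed file name):

# Count the aborted writes, then list one line per qpair-level event.
grep -c 'Write completed with error (sct=0, sc=8)' console.log
grep -E 'CQ transport error|io qpair process completion error' console.log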
-6 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 [2024-10-16 07:06:03.598351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write 
completed with error (sct=0, sc=8) 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 [2024-10-16 07:06:03.599256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write completed with error 
(sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.411 Write completed with error (sct=0, sc=8) 00:23:04.411 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 [2024-10-16 07:06:03.599557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230ac60 is same with Write completed with error (sct=0, sc=8) 00:23:04.412 the state(6) to be set 00:23:04.412 starting I/O failed: -6 00:23:04.412 [2024-10-16 07:06:03.599583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230ac60 is same with Write completed with error (sct=0, sc=8) 00:23:04.412 the state(6) to be set 00:23:04.412 starting I/O failed: -6 00:23:04.412 [2024-10-16 07:06:03.599592] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230ac60 is same with the state(6) to be set 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 [2024-10-16 07:06:03.599600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230ac60 is same with the state(6) to be set 00:23:04.412 starting I/O failed: -6 00:23:04.412 [2024-10-16 07:06:03.599608] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230ac60 is same with the state(6) to be set 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error 
(sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 [2024-10-16 07:06:03.600887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:04.412 NVMe io qpair process completion error 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error 
(sct=0, sc=8) 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 [2024-10-16 07:06:03.601931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 
00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.412 starting I/O failed: -6 00:23:04.412 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 [2024-10-16 07:06:03.602814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error 
(sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 [2024-10-16 07:06:03.603725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error 
(sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error 
(sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.413 starting I/O failed: -6 00:23:04.413 Write completed with error (sct=0, sc=8) 00:23:04.414 starting I/O failed: -6 00:23:04.414 Write completed with error (sct=0, sc=8) 00:23:04.414 starting I/O failed: -6 00:23:04.414 Write completed with error (sct=0, sc=8) 00:23:04.414 starting I/O failed: -6 00:23:04.414 Write completed with error (sct=0, sc=8) 00:23:04.414 starting I/O failed: -6 00:23:04.414 Write completed with error (sct=0, sc=8) 00:23:04.414 starting I/O failed: -6 00:23:04.414 Write completed with error (sct=0, sc=8) 00:23:04.414 starting I/O failed: -6 00:23:04.414 [2024-10-16 07:06:03.605320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:04.414 NVMe io qpair process completion error 00:23:04.414 Write completed with error (sct=0, sc=8) 00:23:04.414 Write completed with error (sct=0, sc=8) 00:23:04.414 Write completed with error (sct=0, sc=8) 00:23:04.414 Write completed with error (sct=0, sc=8) 00:23:04.414 starting I/O failed: -6 00:23:04.414 Write completed with error (sct=0, sc=8) 00:23:04.414 Write completed with error (sct=0, sc=8) 00:23:04.414 Write completed with error (sct=0, sc=8) 00:23:04.414 Write completed with error (sct=0, sc=8) 00:23:04.414 starting I/O failed: -6 00:23:04.414 Write completed with error (sct=0, sc=8) 00:23:04.414 Write completed with error (sct=0, sc=8) 00:23:04.414 Write completed with error (sct=0, sc=8) 00:23:04.414 Write completed with error (sct=0, sc=8) 00:23:04.414 starting I/O failed: -6 00:23:04.414 Write completed with error (sct=0, sc=8) 00:23:04.414 Write completed with error (sct=0, sc=8) 00:23:04.414 Write completed with error (sct=0, sc=8) 00:23:04.414 Write completed with error (sct=0, sc=8) 00:23:04.414 starting I/O failed: -6 00:23:04.414 Write completed with error (sct=0, sc=8) 00:23:04.414 Write completed with error (sct=0, sc=8) 00:23:04.414 Write completed with error (sct=0, sc=8) 00:23:04.414 Write completed with error (sct=0, sc=8) 00:23:04.414 starting I/O failed: -6 00:23:04.414 Write completed with error (sct=0, sc=8) 00:23:04.414 Write completed with error (sct=0, sc=8) 00:23:04.414 Write completed with error (sct=0, sc=8) 00:23:04.414 Write completed with error (sct=0, sc=8) 00:23:04.414 starting I/O failed: -6 00:23:04.414 Write completed with error (sct=0, sc=8) 00:23:04.414 Write completed with error (sct=0, sc=8) 00:23:04.414 Write completed with error (sct=0, sc=8) 00:23:04.414 Write completed with error (sct=0, sc=8) 00:23:04.414 starting I/O failed: -6 00:23:04.414 Write completed with error (sct=0, sc=8) 00:23:04.414 Write completed with error (sct=0, sc=8) 00:23:04.414 Write completed with error (sct=0, sc=8) 00:23:04.414 Write completed with error (sct=0, sc=8) 00:23:04.414 starting I/O failed: -6 00:23:04.414 Write completed with error (sct=0, sc=8) 00:23:04.414 Write completed with error (sct=0, sc=8) 00:23:04.414 Write completed with error (sct=0, sc=8) 00:23:04.414 Write completed with error (sct=0, sc=8) 00:23:04.414 starting I/O failed: -6 00:23:04.414 Write completed with error (sct=0, sc=8) 00:23:04.414 Write completed with error (sct=0, sc=8) 00:23:04.414 Write completed with error (sct=0, sc=8) 00:23:04.414 Write completed with error (sct=0, sc=8) 00:23:04.414 starting I/O failed: -6 00:23:04.414 
[2024-10-16 07:06:03.606413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
[... interleaved "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records omitted ...]
00:23:04.414 [2024-10-16 07:06:03.607238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... failed-write records omitted ...]
00:23:04.414 [2024-10-16 07:06:03.608353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[... failed-write records omitted ...]
00:23:04.415 [2024-10-16 07:06:03.611071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:23:04.415 NVMe io qpair process completion error
[... failed-write records omitted ...]
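Every record in this stretch comes from the same SPDK code path: nvme_qpair.c:804 is spdk_nvme_qpair_process_completions() detecting that the transport connection behind a queue pair is gone. It returns -ENXIO (errno 6, "No such device or address", the "-6" in the records), and each write still outstanding on that qpair is failed back to the submitter. Below is a minimal sketch of a poller reacting to that return value; spdk_nvme_qpair_process_completions() is the real public SPDK API, while handle_qpair_failure() is a hypothetical stand-in for whatever recovery the application does.

#include <errno.h>
#include <stdio.h>
#include "spdk/nvme.h"

/* Hypothetical recovery hook, for illustration only. */
static void
handle_qpair_failure(struct spdk_nvme_qpair *qpair)
{
	(void)qpair;
	fprintf(stderr, "qpair failed, deferring to reconnect/reset logic\n");
}

/* Poll one I/O qpair. spdk_nvme_qpair_process_completions() returns the
 * number of completions reaped, or a negative errno once the qpair is in
 * a failed state; -ENXIO is exactly the "-6" in the "CQ transport error
 * -6 (No such device or address)" records above. */
static void
poll_io_qpair(struct spdk_nvme_qpair *qpair)
{
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0 /* 0 = no limit */);

	if (rc < 0) {
		/* Connection to the target is gone: outstanding writes are
		 * failed back (the "Write completed with error" records) and
		 * new submissions fail until the qpair is reconnected or
		 * destroyed. */
		handle_qpair_failure(qpair);
	}
}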
00:23:04.415 [2024-10-16 07:06:03.612403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
[... failed-write records omitted ...]
00:23:04.415 [2024-10-16 07:06:03.613236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... failed-write records omitted ...]
00:23:04.416 [2024-10-16 07:06:03.614154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[... failed-write records omitted ...]
00:23:04.416 [2024-10-16 07:06:03.615616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:04.416 NVMe io qpair process completion error
[... failed-write records omitted ...]
00:23:04.417 [2024-10-16 07:06:03.616708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... failed-write records omitted ...]
00:23:04.417 [2024-10-16 07:06:03.617613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[... failed-write records omitted ...]
00:23:04.417 [2024-10-16 07:06:03.618512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[... failed-write records omitted ...]
00:23:04.418 [2024-10-16 07:06:03.620142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:23:04.418 NVMe io qpair process completion error
[... failed-write records omitted ...]
00:23:04.418 [2024-10-16 07:06:03.621336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... failed-write records omitted ...]
00:23:04.418 [2024-10-16 07:06:03.622160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[... failed-write records omitted ...]
00:23:04.419 [2024-10-16 07:06:03.623083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[... failed-write records omitted ...]
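For reference, the status pair on every failed write decodes per the NVMe base specification: sct=0 is Status Code Type 0 (Generic Command Status) and, within that type, sc=8 (0x08) is "Command Aborted due to SQ Deletion", i.e. the submission queue behind the qpair was deleted while the write was still in flight. A hedged sketch of checking for exactly this status in an I/O completion callback follows; struct spdk_nvme_cpl, SPDK_NVME_SCT_GENERIC, and SPDK_NVME_SC_ABORTED_SQ_DELETION are SPDK's public names, while write_done itself is illustrative.

#include <stdio.h>
#include "spdk/nvme.h"

/* Illustrative I/O completion callback (signature matches SPDK's
 * spdk_nvme_cmd_cb). Decodes the status pair seen throughout this log. */
static void
write_done(void *ctx, const struct spdk_nvme_cpl *cpl)
{
	(void)ctx;
	if (spdk_nvme_cpl_is_error(cpl)) {
		fprintf(stderr, "Write completed with error (sct=%d, sc=%d)\n",
			cpl->status.sct, cpl->status.sc);
		if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
		    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
			/* sct=0, sc=8: the qpair was deleted with this write
			 * still outstanding; retry on another qpair or
			 * surface the failure to the caller. */
		}
	}
}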
00:23:04.419 [2024-10-16 07:06:03.625498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:23:04.419 NVMe io qpair process completion error
[... failed-write records omitted ...]
00:23:04.420 [2024-10-16 07:06:03.626554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... failed-write records omitted ...]
00:23:04.420 [2024-10-16 07:06:03.627392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[... failed-write records omitted ...]
00:23:04.420 [2024-10-16 07:06:03.628343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[... failed-write records omitted ...]
00:23:04.420 Write completed
with error (sct=0, sc=8) 00:23:04.420 starting I/O failed: -6 00:23:04.420 Write completed with error (sct=0, sc=8) 00:23:04.420 starting I/O failed: -6 00:23:04.420 Write completed with error (sct=0, sc=8) 00:23:04.420 starting I/O failed: -6 00:23:04.420 Write completed with error (sct=0, sc=8) 00:23:04.420 starting I/O failed: -6 00:23:04.420 Write completed with error (sct=0, sc=8) 00:23:04.420 starting I/O failed: -6 00:23:04.420 Write completed with error (sct=0, sc=8) 00:23:04.420 starting I/O failed: -6 00:23:04.420 Write completed with error (sct=0, sc=8) 00:23:04.420 starting I/O failed: -6 00:23:04.420 Write completed with error (sct=0, sc=8) 00:23:04.420 starting I/O failed: -6 00:23:04.420 Write completed with error (sct=0, sc=8) 00:23:04.420 starting I/O failed: -6 00:23:04.420 Write completed with error (sct=0, sc=8) 00:23:04.420 starting I/O failed: -6 00:23:04.420 Write completed with error (sct=0, sc=8) 00:23:04.420 starting I/O failed: -6 00:23:04.420 Write completed with error (sct=0, sc=8) 00:23:04.420 starting I/O failed: -6 00:23:04.420 Write completed with error (sct=0, sc=8) 00:23:04.420 starting I/O failed: -6 00:23:04.420 Write completed with error (sct=0, sc=8) 00:23:04.420 starting I/O failed: -6 00:23:04.420 Write completed with error (sct=0, sc=8) 00:23:04.420 starting I/O failed: -6 00:23:04.420 Write completed with error (sct=0, sc=8) 00:23:04.420 starting I/O failed: -6 00:23:04.420 Write completed with error (sct=0, sc=8) 00:23:04.420 starting I/O failed: -6 00:23:04.420 Write completed with error (sct=0, sc=8) 00:23:04.420 starting I/O failed: -6 00:23:04.420 Write completed with error (sct=0, sc=8) 00:23:04.420 starting I/O failed: -6 00:23:04.420 Write completed with error (sct=0, sc=8) 00:23:04.420 starting I/O failed: -6 00:23:04.420 Write completed with error (sct=0, sc=8) 00:23:04.420 starting I/O failed: -6 00:23:04.420 Write completed with error (sct=0, sc=8) 00:23:04.420 starting I/O failed: -6 00:23:04.420 Write completed with error (sct=0, sc=8) 00:23:04.420 starting I/O failed: -6 00:23:04.420 Write completed with error (sct=0, sc=8) 00:23:04.420 starting I/O failed: -6 00:23:04.420 Write completed with error (sct=0, sc=8) 00:23:04.420 starting I/O failed: -6 00:23:04.420 Write completed with error (sct=0, sc=8) 00:23:04.420 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with 
error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 [2024-10-16 07:06:03.630225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:23:04.421 NVMe io qpair process completion error 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 
Write completed with error (sct=0, sc=8) 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 [2024-10-16 07:06:03.631524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 [2024-10-16 07:06:03.632335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:04.421 Write 
completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 Write completed with error (sct=0, sc=8) 00:23:04.421 starting I/O failed: -6 00:23:04.422 Write completed with error (sct=0, sc=8) 00:23:04.422 starting I/O failed: -6 00:23:04.422 Write completed with error (sct=0, sc=8) 00:23:04.422 starting I/O failed: -6 00:23:04.422 Write completed with error (sct=0, sc=8) 00:23:04.422 Write completed with error (sct=0, sc=8) 00:23:04.422 starting I/O failed: -6 00:23:04.422 Write completed with error (sct=0, sc=8) 00:23:04.422 starting I/O failed: -6 00:23:04.422 Write completed with error (sct=0, sc=8) 00:23:04.422 starting I/O failed: -6 00:23:04.422 Write completed with error (sct=0, sc=8) 00:23:04.422 Write completed with error (sct=0, sc=8) 00:23:04.422 starting I/O failed: -6 00:23:04.422 Write completed with error (sct=0, sc=8) 00:23:04.422 
starting I/O failed: -6 00:23:04.422 Write completed with error (sct=0, sc=8) 00:23:04.422 starting I/O failed: -6 00:23:04.422 Write completed with error (sct=0, sc=8) 00:23:04.422 Write completed with error (sct=0, sc=8) 00:23:04.422 starting I/O failed: -6 00:23:04.422 Write completed with error (sct=0, sc=8) 00:23:04.422 starting I/O failed: -6 00:23:04.422 Write completed with error (sct=0, sc=8) 00:23:04.422 starting I/O failed: -6 00:23:04.422 [2024-10-16 07:06:03.633274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:23:04.422 Write completed with error (sct=0, sc=8) 00:23:04.422 starting I/O failed: -6 00:23:04.422 Write completed with error (sct=0, sc=8) 00:23:04.422 starting I/O failed: -6 00:23:04.422 Write completed with error (sct=0, sc=8) 00:23:04.422 starting I/O failed: -6 00:23:04.422 Write completed with error (sct=0, sc=8) 00:23:04.422 starting I/O failed: -6 00:23:04.422 Write completed with error (sct=0, sc=8) 00:23:04.422 starting I/O failed: -6 00:23:04.422 Write completed with error (sct=0, sc=8) 00:23:04.422 starting I/O failed: -6 00:23:04.422 Write completed with error (sct=0, sc=8) 00:23:04.422 starting I/O failed: -6 00:23:04.422 Write completed with error (sct=0, sc=8) 00:23:04.422 starting I/O failed: -6 00:23:04.422 Write completed with error (sct=0, sc=8) 00:23:04.422 starting I/O failed: -6 00:23:04.422 Write completed with error (sct=0, sc=8) 00:23:04.422 starting I/O failed: -6 00:23:04.422 Write completed with error (sct=0, sc=8) 00:23:04.422 starting I/O failed: -6 00:23:04.422 Write completed with error (sct=0, sc=8) 00:23:04.422 starting I/O failed: -6 00:23:04.422 Write completed with error (sct=0, sc=8) 00:23:04.422 starting I/O failed: -6 00:23:04.422 Write completed with error (sct=0, sc=8) 00:23:04.422 starting I/O failed: -6 00:23:04.422 Write completed with error (sct=0, sc=8) 00:23:04.422 starting I/O failed: -6 00:23:04.422 Write completed with error (sct=0, sc=8) 00:23:04.422 starting I/O failed: -6 00:23:04.422 Write completed with error (sct=0, sc=8) 00:23:04.422 starting I/O failed: -6 00:23:04.422 Write completed with error (sct=0, sc=8) 00:23:04.422 starting I/O failed: -6 00:23:04.422 Write completed with error (sct=0, sc=8) 00:23:04.422 starting I/O failed: -6 00:23:04.422 Write completed with error (sct=0, sc=8) 00:23:04.422 starting I/O failed: -6 00:23:04.422 Write completed with error (sct=0, sc=8) 00:23:04.422 starting I/O failed: -6 00:23:04.422 Write completed with error (sct=0, sc=8) 00:23:04.422 starting I/O failed: -6 00:23:04.422 Write completed with error (sct=0, sc=8) 00:23:04.422 starting I/O failed: -6 00:23:04.422 Write completed with error (sct=0, sc=8) 00:23:04.422 starting I/O failed: -6 00:23:04.422 Write completed with error (sct=0, sc=8) 00:23:04.422 starting I/O failed: -6 00:23:04.422 Write completed with error (sct=0, sc=8) 00:23:04.422 starting I/O failed: -6 00:23:04.422 Write completed with error (sct=0, sc=8) 00:23:04.422 starting I/O failed: -6 00:23:04.422 Write completed with error (sct=0, sc=8) 00:23:04.422 starting I/O failed: -6 00:23:04.422 Write completed with error (sct=0, sc=8) 00:23:04.422 starting I/O failed: -6 00:23:04.422 Write completed with error (sct=0, sc=8) 00:23:04.422 starting I/O failed: -6 00:23:04.422 Write completed with error (sct=0, sc=8) 00:23:04.422 starting I/O failed: -6 00:23:04.422 Write completed with error (sct=0, sc=8) 00:23:04.422 starting I/O failed: -6 00:23:04.422 Write completed 
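The -6 completion status flooding this stretch of the log is ENXIO ("No such device or address"): nvmf_shutdown_tc4 deliberately tears the target down while spdk_nvme_perf still has writes queued, so every in-flight and newly submitted I/O fails back through spdk_nvme_qpair_process_completions(). A minimal sketch of that failure-injection pattern (the transport string, PIDs, and flag values here are illustrative assumptions, not taken from this run):

    # run perf against the target, then yank the target mid-run
    TRID='trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
    ./build/bin/spdk_nvme_perf -r "$TRID" -q 128 -o 4096 -w randwrite -t 10 &
    perf_pid=$!
    sleep 2
    kill -9 "$tgt_pid"      # $tgt_pid: the nvmf target process (assumed known)
    NOT wait "$perf_pid"    # perf must exit nonzero; its writes fail with -6 (ENXIO)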
00:23:04.422 Write completed with error (sct=0, sc=8) [repeated many times, interleaved with "starting I/O failed: -6"]
00:23:04.422 [2024-10-16 07:06:03.635552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:23:04.422 NVMe io qpair process completion error
00:23:04.422 Write completed with error (sct=0, sc=8) [repeated many times, interleaved with "starting I/O failed: -6"]
00:23:04.422 [2024-10-16 07:06:03.636719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:04.423 Write completed with error (sct=0, sc=8) [repeated many times, interleaved with "starting I/O failed: -6"]
00:23:04.423 [2024-10-16 07:06:03.637526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:04.423 Write completed with error (sct=0, sc=8) [repeated many times, interleaved with "starting I/O failed: -6"]
00:23:04.423 [2024-10-16 07:06:03.638875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:23:04.424 Write completed with error (sct=0, sc=8) [repeated many times, interleaved with "starting I/O failed: -6"]
00:23:04.424 [2024-10-16 07:06:03.640515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:23:04.424 NVMe io qpair process completion error
00:23:04.424 Initializing NVMe Controllers
00:23:04.424 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:23:04.424 Controller IO queue size 128, less than required.
00:23:04.424 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
[the same two-line queue-size warning follows each controller attach below]
00:23:04.424 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:23:04.424 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:04.424 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:23:04.424 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:23:04.424 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:23:04.424 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:23:04.424 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:23:04.424 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:23:04.424 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
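The repeated queue-size warning is advisory: each attached controller advertises an I/O queue of 128 entries, so a workload keeping 128 commands outstanding leaves no slack and extra submissions wait inside the host driver. If that host-side queueing mattered, the remedy is a smaller queue depth or I/O size on the perf side, e.g. (flags per spdk_nvme_perf's usage text; the transport string is an assumption):

    ./build/bin/spdk_nvme_perf -r "$TRID" -q 64 -o 4096 -w randwrite -t 10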
00:23:04.424 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:23:04.424 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:23:04.424 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:04.424 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:23:04.424 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:23:04.424 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:23:04.424 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:23:04.424 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:23:04.424 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:23:04.424 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:23:04.424 Initialization complete. Launching workers. 00:23:04.424 ======================================================== 00:23:04.424 Latency(us) 00:23:04.424 Device Information : IOPS MiB/s Average min max 00:23:04.424 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1905.59 81.88 67189.06 825.99 130761.07 00:23:04.424 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1887.87 81.12 67854.25 721.68 133042.23 00:23:04.424 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1881.96 80.87 67397.00 620.28 128189.55 00:23:04.424 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1874.15 80.53 67693.91 805.70 125561.88 00:23:04.424 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1822.25 78.30 69641.98 896.87 125675.96 00:23:04.424 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1869.30 80.32 67911.80 685.05 127093.65 00:23:04.424 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1873.94 80.52 67782.27 671.78 126557.30 00:23:04.424 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1852.00 79.58 68602.55 816.83 126772.71 00:23:04.424 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1877.74 80.68 67684.21 720.20 126416.60 00:23:04.424 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1877.53 80.68 67723.11 710.79 119299.18 00:23:04.424 ======================================================== 00:23:04.424 Total : 18722.32 804.47 67940.89 620.28 133042.23 00:23:04.424 00:23:04.424 [2024-10-16 07:06:03.643271] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf7e60 is same with the state(6) to be set 00:23:04.424 [2024-10-16 07:06:03.643314] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf8190 is same with the state(6) to be set 00:23:04.424 [2024-10-16 07:06:03.643343] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf37f0 is same with the state(6) to be set 00:23:04.424 [2024-10-16 07:06:03.643371] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf3bb0 is same with the state(6) to be set 00:23:04.424 [2024-10-16 07:06:03.643401] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf1960 is same with the state(6) to be set 00:23:04.424 [2024-10-16 07:06:03.643430] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: 
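The table is internally consistent with that 128-deep queue: by Little's law, the mean number of I/Os in flight is IOPS times mean latency. Checking the cnode7 row:

    # 1905.59 IOPS x 67189.06 us mean latency, converted to seconds
    awk 'BEGIN { printf "%.1f\n", 1905.59 * 67189.06 / 1e6 }'   # -> 128.0

So each controller ran with its full 128-entry queue occupied for the whole measurement, which is why average latency sits near 67 ms at only ~1900 IOPS per device.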
00:23:04.424 [2024-10-16 07:06:03.643271] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf7e60 is same with the state(6) to be set
00:23:04.424 [2024-10-16 07:06:03.643314] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf8190 is same with the state(6) to be set
00:23:04.424 [2024-10-16 07:06:03.643343] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf37f0 is same with the state(6) to be set
00:23:04.424 [2024-10-16 07:06:03.643371] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf3bb0 is same with the state(6) to be set
00:23:04.424 [2024-10-16 07:06:03.643401] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf1960 is same with the state(6) to be set
00:23:04.424 [2024-10-16 07:06:03.643430] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf39d0 is same with the state(6) to be set
00:23:04.424 [2024-10-16 07:06:03.643458] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf1c90 is same with the state(6) to be set
00:23:04.424 [2024-10-16 07:06:03.643487] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf1630 is same with the state(6) to be set
00:23:04.424 [2024-10-16 07:06:03.643513] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf1fc0 is same with the state(6) to be set
00:23:04.424 [2024-10-16 07:06:03.643540] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf84c0 is same with the state(6) to be set
00:23:04.424 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:23:04.424 07:06:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:23:05.368 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3197333
00:23:05.368 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0
00:23:05.368 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3197333
00:23:05.368 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait
00:23:05.368 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:05.368 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait
00:23:05.368 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:05.368 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 3197333
00:23:05.368 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1
00:23:05.368 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:23:05.368 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:23:05.368 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:23:05.368 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:23:05.368 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:23:05.368 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:23:05.368 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:23:05.368 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:23:05.368 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@514 -- # nvmfcleanup
00:23:05.368 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
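The NOT wait trace above is how the harness asserts an expected failure: wait surfaces spdk_nvme_perf's nonzero exit status (es=1), and NOT inverts it, so this step passes only because perf failed. A hedged sketch of the logic the trace walks through (the real helper in autotest_common.sh also validates its argument via valid_exec_arg):

    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return "$es"  # exits above 128 (signal deaths) stay fatal
        (( !es == 0 ))                  # succeed only if the command failed
    }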
00:23:05.368 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:23:05.368 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:23:05.368 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:23:05.368 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:23:05.368 rmmod nvme_tcp
00:23:05.630 rmmod nvme_fabrics
00:23:05.630 rmmod nvme_keyring
00:23:05.630 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:23:05.630 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:23:05.630 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:23:05.630 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@515 -- # '[' -n 3196957 ']'
00:23:05.630 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # killprocess 3196957
00:23:05.630 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 3196957 ']'
00:23:05.630 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 3196957
00:23:05.630 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3196957) - No such process
00:23:05.630 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@977 -- # echo 'Process with pid 3196957 is not found'
00:23:05.630 Process with pid 3196957 is not found
00:23:05.630 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:23:05.630 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:23:05.630 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:23:05.630 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr
00:23:05.630 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-save
00:23:05.630 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:23:05.630 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-restore
00:23:05.630 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:23:05.630 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:23:05.630 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:05.630 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:23:05.630 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:07.544 07:06:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
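The iptr step in the cleanup above restores the firewall by round-tripping the ruleset and dropping only SPDK's tagged entries, exactly as the three traced commands suggest:

    # keep every rule except those tagged SPDK_NVMF
    iptables-save | grep -v SPDK_NVMF | iptables-restore

This is a blunt but effective pattern: rules the test added are filtered out of the dump, and everything else survives the restore unchanged.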
00:23:07.544
00:23:07.544 real 0m10.275s
00:23:07.544 user 0m27.948s
00:23:07.544 sys 0m3.940s
00:23:07.544 07:06:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1126 -- # xtrace_disable
00:23:07.544 07:06:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:23:07.544 ************************************
00:23:07.544 END TEST nvmf_shutdown_tc4
00:23:07.544 ************************************
00:23:07.806 07:06:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:23:07.806
00:23:07.806 real 0m43.496s
00:23:07.806 user 1m45.061s
00:23:07.806 sys 0m13.916s
00:23:07.806 07:06:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable
00:23:07.806 07:06:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:23:07.806 ************************************
00:23:07.806 END TEST nvmf_shutdown
00:23:07.806 ************************************
00:23:07.806 07:06:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:23:07.806
00:23:07.806 real 12m46.545s
00:23:07.806 user 27m6.624s
00:23:07.806 sys 3m48.224s
00:23:07.806 07:06:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable
00:23:07.806 07:06:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:23:07.806 ************************************
00:23:07.806 END TEST nvmf_target_extra
00:23:07.806 ************************************
00:23:07.806 07:06:07 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp
00:23:07.806 07:06:07 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:23:07.806 07:06:07 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable
00:23:07.806 07:06:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:23:07.806 ************************************
00:23:07.806 START TEST nvmf_host
00:23:07.806 ************************************
00:23:07.806 07:06:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp
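The START/END banners and the real/user/sys triplets bracketing each suite come from the harness's run_test wrapper, which times the named test and records its status. A rough sketch of its shape (the actual wrapper in autotest_common.sh also toggles xtrace and tracks failures; this is not its verbatim source):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                      # produces the real/user/sys lines seen above
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }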
00:23:07.806 * Looking for test storage...
00:23:07.806 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:23:08.068 07:06:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:23:08.068 07:06:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version
00:23:08.068 07:06:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:23:08.068 07:06:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:23:08.068 07:06:07 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:23:08.068 07:06:07 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l
00:23:08.068 07:06:07 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l
00:23:08.068 07:06:07 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-:
00:23:08.068 07:06:07 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1
00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-:
00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2
00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<'
00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2
00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1
00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in
00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1
00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 ))
00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1
00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1
00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1
00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1
00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2
00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2
00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2
00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2
00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0
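The lt 1.15 2 call gates lcov-2.x-only options: lcov --version reported 1.15, and cmp_versions walks the dot-separated fields numerically, as the per-field decimal checks in the trace show. A condensed sketch of that comparison (the traced original in scripts/common.sh also validates each field and splits on ':' and '-'):

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local IFS=.- op=$2 v
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' ]]   # all fields equal
    }
    # lt 1.15 2 -> first fields compare 1 < 2, so lt succeeds (lcov is older than 2)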
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:08.069 --rc genhtml_branch_coverage=1 00:23:08.069 --rc genhtml_function_coverage=1 00:23:08.069 --rc genhtml_legend=1 00:23:08.069 --rc geninfo_all_blocks=1 00:23:08.069 --rc geninfo_unexecuted_blocks=1 00:23:08.069 00:23:08.069 ' 00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:08.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:08.069 --rc genhtml_branch_coverage=1 00:23:08.069 --rc genhtml_function_coverage=1 00:23:08.069 --rc genhtml_legend=1 00:23:08.069 --rc geninfo_all_blocks=1 00:23:08.069 --rc geninfo_unexecuted_blocks=1 00:23:08.069 00:23:08.069 ' 00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:08.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:08.069 --rc genhtml_branch_coverage=1 00:23:08.069 --rc genhtml_function_coverage=1 00:23:08.069 --rc genhtml_legend=1 00:23:08.069 --rc geninfo_all_blocks=1 00:23:08.069 --rc geninfo_unexecuted_blocks=1 00:23:08.069 00:23:08.069 ' 00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:08.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:08.069 07:06:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.069 ************************************ 00:23:08.069 START TEST nvmf_multicontroller 00:23:08.069 ************************************ 00:23:08.069 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:08.069 * Looking for test storage... 00:23:08.332 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:08.332 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:08.332 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lcov --version 00:23:08.332 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:08.332 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:08.332 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:08.332 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:08.332 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:08.332 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:23:08.332 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:23:08.332 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:23:08.332 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:23:08.332 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:23:08.332 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:23:08.332 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:23:08.332 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:08.332 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:23:08.332 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:23:08.332 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:08.332 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:08.332 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:23:08.332 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:23:08.332 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:08.332 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:23:08.332 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:23:08.332 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:23:08.332 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:23:08.332 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:08.332 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:23:08.332 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:23:08.332 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:08.332 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:08.332 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:23:08.332 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:08.332 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:08.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:08.332 --rc genhtml_branch_coverage=1 00:23:08.332 --rc genhtml_function_coverage=1 00:23:08.332 --rc genhtml_legend=1 00:23:08.332 --rc geninfo_all_blocks=1 00:23:08.332 --rc geninfo_unexecuted_blocks=1 00:23:08.332 00:23:08.332 ' 00:23:08.332 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:08.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:08.332 --rc genhtml_branch_coverage=1 00:23:08.332 --rc genhtml_function_coverage=1 00:23:08.332 --rc genhtml_legend=1 00:23:08.332 --rc geninfo_all_blocks=1 00:23:08.332 --rc geninfo_unexecuted_blocks=1 00:23:08.332 00:23:08.332 ' 00:23:08.332 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:08.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:08.332 --rc genhtml_branch_coverage=1 00:23:08.332 --rc genhtml_function_coverage=1 00:23:08.332 --rc genhtml_legend=1 00:23:08.332 --rc geninfo_all_blocks=1 00:23:08.332 --rc geninfo_unexecuted_blocks=1 00:23:08.332 00:23:08.332 ' 00:23:08.332 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:08.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:08.332 --rc genhtml_branch_coverage=1 00:23:08.332 --rc genhtml_function_coverage=1 00:23:08.332 --rc genhtml_legend=1 00:23:08.332 --rc geninfo_all_blocks=1 00:23:08.332 --rc geninfo_unexecuted_blocks=1 00:23:08.332 00:23:08.332 ' 00:23:08.332 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:08.332 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:08.332 07:06:07 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:08.332 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:08.332 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:08.332 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:08.332 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:08.332 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:08.332 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:08.332 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:08.332 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:08.332 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:08.332 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:08.332 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:08.332 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:08.332 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:08.332 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:08.332 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:08.332 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:08.332 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:23:08.332 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:08.332 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:08.332 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:08.332 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.332 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.333 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.333 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:08.333 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.333 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:23:08.333 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:08.333 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:08.333 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:08.333 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:08.333 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:08.333 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:08.333 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:08.333 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:08.333 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:08.333 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:08.333 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:08.333 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:08.333 07:06:07 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:08.333 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:08.333 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:08.333 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:08.333 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:08.333 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:08.333 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:08.333 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:08.333 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:08.333 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:08.333 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:08.333 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:08.333 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:08.333 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:08.333 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:08.333 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:23:08.333 07:06:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:16.478 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:16.478 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:23:16.478 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:16.478 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:16.478 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:16.478 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:16.478 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:16.478 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:23:16.478 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:16.478 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:23:16.478 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:23:16.478 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:23:16.478 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:23:16.478 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:23:16.478 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:23:16.478 
07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:16.478 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:16.478 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:16.478 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:16.478 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:16.478 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:16.478 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:16.478 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:16.478 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:16.478 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:16.478 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:16.478 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:16.478 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:16.478 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:16.478 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:16.478 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:16.478 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:16.478 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:16.479 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:16.479 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:16.479 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:16.479 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:16.479 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:16.479 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:16.479 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:16.479 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:16.479 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:16.479 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:16.479 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:16.479 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:16.479 07:06:14 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:16.479 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:16.479 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:16.479 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:16.479 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:16.479 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:16.479 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:16.479 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:16.479 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:16.479 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:16.479 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:16.479 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:16.479 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:16.479 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:16.479 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:16.479 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:16.479 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:16.479 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:16.479 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:16.479 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:16.479 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:16.479 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:16.479 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:16.479 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:16.479 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:16.479 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:16.479 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:16.479 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:16.479 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # is_hw=yes 00:23:16.479 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:16.479 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:16.479 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # nvmf_tcp_init 
00:23:16.479 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:16.479 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:16.479 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:16.479 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:16.479 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:16.479 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:16.479 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:16.479 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:16.479 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:16.479 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:16.479 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:16.479 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:16.479 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:16.479 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:16.479 07:06:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:16.479 07:06:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:16.479 07:06:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:16.479 07:06:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:16.479 07:06:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:16.479 07:06:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:16.479 07:06:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:16.479 07:06:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:16.479 07:06:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:16.479 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:16.479 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:23:16.479 00:23:16.479 --- 10.0.0.2 ping statistics --- 00:23:16.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:16.479 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:23:16.479 07:06:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:16.479 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:16.479 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:23:16.479 00:23:16.479 --- 10.0.0.1 ping statistics --- 00:23:16.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:16.479 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:23:16.479 07:06:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:16.479 07:06:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # return 0 00:23:16.479 07:06:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:16.479 07:06:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:16.479 07:06:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:16.479 07:06:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:16.479 07:06:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:16.479 07:06:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:16.479 07:06:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:16.479 07:06:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:16.479 07:06:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:16.479 07:06:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:16.479 07:06:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:16.479 07:06:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # nvmfpid=3203320 00:23:16.479 07:06:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # waitforlisten 3203320 00:23:16.480 07:06:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:16.480 07:06:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 3203320 ']' 00:23:16.480 07:06:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:16.480 07:06:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:16.480 07:06:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:16.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:16.480 07:06:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:16.480 07:06:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:16.480 [2024-10-16 07:06:15.347895] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
00:23:16.480 [2024-10-16 07:06:15.347966] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:16.480 [2024-10-16 07:06:15.434603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:16.480 [2024-10-16 07:06:15.485559] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:16.480 [2024-10-16 07:06:15.485610] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:16.480 [2024-10-16 07:06:15.485619] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:16.480 [2024-10-16 07:06:15.485626] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:16.480 [2024-10-16 07:06:15.485632] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:16.480 [2024-10-16 07:06:15.487765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:16.480 [2024-10-16 07:06:15.487917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:16.480 [2024-10-16 07:06:15.487942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:16.741 07:06:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:16.741 07:06:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:23:16.741 07:06:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:16.741 07:06:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:16.741 07:06:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:16.741 07:06:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:16.741 07:06:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:16.741 07:06:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.741 07:06:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:16.741 [2024-10-16 07:06:16.198467] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:16.741 07:06:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.741 07:06:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:16.741 07:06:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.741 07:06:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:17.002 Malloc0 00:23:17.002 07:06:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.002 07:06:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:17.002 07:06:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.002 07:06:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:23:17.002 07:06:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.002 07:06:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:17.002 07:06:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.002 07:06:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:17.002 07:06:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.002 07:06:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:17.002 07:06:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.002 07:06:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:17.002 [2024-10-16 07:06:16.275384] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:17.002 07:06:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.002 07:06:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:17.002 07:06:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.002 07:06:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:17.002 [2024-10-16 07:06:16.287275] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:17.002 07:06:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.002 07:06:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:17.002 07:06:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.002 07:06:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:17.002 Malloc1 00:23:17.002 07:06:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.002 07:06:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:17.002 07:06:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.002 07:06:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:17.002 07:06:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.002 07:06:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:17.002 07:06:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.002 07:06:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:17.002 07:06:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.002 07:06:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:17.002 07:06:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.002 07:06:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:17.002 07:06:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.002 07:06:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:17.002 07:06:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.002 07:06:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:17.002 07:06:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.002 07:06:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3203600 00:23:17.002 07:06:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:17.002 07:06:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:17.002 07:06:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3203600 /var/tmp/bdevperf.sock 00:23:17.002 07:06:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 3203600 ']' 00:23:17.002 07:06:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:17.002 07:06:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:17.002 07:06:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:17.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:17.002 07:06:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:17.002 07:06:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:17.945 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:17.945 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:23:17.945 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:17.945 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.945 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:18.206 NVMe0n1 00:23:18.206 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.206 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:18.206 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:18.206 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.206 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:18.206 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.206 1 00:23:18.206 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:18.206 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:18.206 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:18.206 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:18.206 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:18.206 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:18.206 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:18.206 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:18.206 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.206 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:18.206 request: 00:23:18.206 { 00:23:18.206 "name": "NVMe0", 00:23:18.206 "trtype": "tcp", 00:23:18.206 "traddr": "10.0.0.2", 00:23:18.206 "adrfam": "ipv4", 00:23:18.206 "trsvcid": "4420", 00:23:18.206 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:23:18.206 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:18.206 "hostaddr": "10.0.0.1", 00:23:18.206 "prchk_reftag": false, 00:23:18.206 "prchk_guard": false, 00:23:18.206 "hdgst": false, 00:23:18.206 "ddgst": false, 00:23:18.206 "allow_unrecognized_csi": false, 00:23:18.206 "method": "bdev_nvme_attach_controller", 00:23:18.206 "req_id": 1 00:23:18.206 } 00:23:18.206 Got JSON-RPC error response 00:23:18.206 response: 00:23:18.206 { 00:23:18.206 "code": -114, 00:23:18.206 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:18.206 } 00:23:18.206 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:18.206 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:18.206 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:18.206 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:18.206 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:18.206 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:18.206 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:18.206 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:18.206 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:18.206 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:18.206 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:18.206 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:18.206 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:18.206 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.206 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:18.206 request: 00:23:18.206 { 00:23:18.206 "name": "NVMe0", 00:23:18.206 "trtype": "tcp", 00:23:18.206 "traddr": "10.0.0.2", 00:23:18.206 "adrfam": "ipv4", 00:23:18.206 "trsvcid": "4420", 00:23:18.206 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:18.206 "hostaddr": "10.0.0.1", 00:23:18.206 "prchk_reftag": false, 00:23:18.206 "prchk_guard": false, 00:23:18.206 "hdgst": false, 00:23:18.206 "ddgst": false, 00:23:18.206 "allow_unrecognized_csi": false, 00:23:18.206 "method": "bdev_nvme_attach_controller", 00:23:18.206 "req_id": 1 00:23:18.206 } 00:23:18.206 Got JSON-RPC error response 00:23:18.206 response: 00:23:18.206 { 00:23:18.206 "code": -114, 00:23:18.206 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:18.206 } 00:23:18.206 07:06:17 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:18.206 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:18.206 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:18.206 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:18.206 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:18.206 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:18.206 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:18.206 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:18.206 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:18.206 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:18.206 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:18.206 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:18.206 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:18.206 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.206 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:18.206 request: 00:23:18.206 { 00:23:18.206 "name": "NVMe0", 00:23:18.206 "trtype": "tcp", 00:23:18.206 "traddr": "10.0.0.2", 00:23:18.206 "adrfam": "ipv4", 00:23:18.206 "trsvcid": "4420", 00:23:18.206 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:18.206 "hostaddr": "10.0.0.1", 00:23:18.206 "prchk_reftag": false, 00:23:18.206 "prchk_guard": false, 00:23:18.206 "hdgst": false, 00:23:18.206 "ddgst": false, 00:23:18.206 "multipath": "disable", 00:23:18.206 "allow_unrecognized_csi": false, 00:23:18.206 "method": "bdev_nvme_attach_controller", 00:23:18.206 "req_id": 1 00:23:18.206 } 00:23:18.206 Got JSON-RPC error response 00:23:18.206 response: 00:23:18.206 { 00:23:18.206 "code": -114, 00:23:18.206 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:23:18.206 } 00:23:18.206 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:18.206 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:18.206 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:18.206 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:18.206 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:18.206 07:06:17 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:18.206 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:18.206 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:18.207 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:18.207 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:18.207 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:18.207 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:18.207 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:18.207 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.207 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:18.207 request: 00:23:18.207 { 00:23:18.207 "name": "NVMe0", 00:23:18.207 "trtype": "tcp", 00:23:18.207 "traddr": "10.0.0.2", 00:23:18.207 "adrfam": "ipv4", 00:23:18.207 "trsvcid": "4420", 00:23:18.207 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:18.207 "hostaddr": "10.0.0.1", 00:23:18.207 "prchk_reftag": false, 00:23:18.207 "prchk_guard": false, 00:23:18.207 "hdgst": false, 00:23:18.207 "ddgst": false, 00:23:18.207 "multipath": "failover", 00:23:18.207 "allow_unrecognized_csi": false, 00:23:18.207 "method": "bdev_nvme_attach_controller", 00:23:18.207 "req_id": 1 00:23:18.207 } 00:23:18.207 Got JSON-RPC error response 00:23:18.207 response: 00:23:18.207 { 00:23:18.207 "code": -114, 00:23:18.207 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:18.207 } 00:23:18.207 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:18.207 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:18.207 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:18.207 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:18.207 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:18.207 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:18.207 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.207 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:18.467 NVMe0n1 00:23:18.468 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:23:18.468 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:18.468 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.468 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:18.468 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.468 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:18.468 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.468 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:18.468 00:23:18.468 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.468 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:18.468 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:18.468 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.468 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:18.468 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.728 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:18.728 07:06:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:19.668 { 00:23:19.668 "results": [ 00:23:19.668 { 00:23:19.668 "job": "NVMe0n1", 00:23:19.668 "core_mask": "0x1", 00:23:19.668 "workload": "write", 00:23:19.668 "status": "finished", 00:23:19.668 "queue_depth": 128, 00:23:19.668 "io_size": 4096, 00:23:19.668 "runtime": 1.006329, 00:23:19.668 "iops": 25478.74502275101, 00:23:19.668 "mibps": 99.52634774512113, 00:23:19.668 "io_failed": 0, 00:23:19.668 "io_timeout": 0, 00:23:19.668 "avg_latency_us": 5012.298683307333, 00:23:19.668 "min_latency_us": 2116.266666666667, 00:23:19.668 "max_latency_us": 15073.28 00:23:19.668 } 00:23:19.668 ], 00:23:19.668 "core_count": 1 00:23:19.668 } 00:23:19.668 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:19.668 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.668 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:19.668 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.668 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:23:19.668 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 3203600 00:23:19.668 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@950 -- # '[' -z 3203600 ']' 00:23:19.668 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 3203600 00:23:19.668 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:23:19.668 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:19.668 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3203600 00:23:19.929 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:19.929 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:19.929 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3203600' 00:23:19.929 killing process with pid 3203600 00:23:19.929 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 3203600 00:23:19.929 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 3203600 00:23:19.929 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:19.929 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.929 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:19.929 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.929 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:19.929 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.929 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:19.929 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.929 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:23:19.929 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:19.929 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:23:19.929 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:19.929 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:23:19.929 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:23:19.929 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:19.929 [2024-10-16 07:06:16.419901] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
00:23:19.929 [2024-10-16 07:06:16.419976] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3203600 ] 00:23:19.930 [2024-10-16 07:06:16.501642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.930 [2024-10-16 07:06:16.555166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:19.930 [2024-10-16 07:06:17.943004] bdev.c:4701:bdev_name_add: *ERROR*: Bdev name 10abcec8-40bc-42ff-83c6-7c7e6bae9e1b already exists 00:23:19.930 [2024-10-16 07:06:17.943033] bdev.c:7846:bdev_register: *ERROR*: Unable to add uuid:10abcec8-40bc-42ff-83c6-7c7e6bae9e1b alias for bdev NVMe1n1 00:23:19.930 [2024-10-16 07:06:17.943041] bdev_nvme.c:4483:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:19.930 Running I/O for 1 seconds... 00:23:19.930 25434.00 IOPS, 99.35 MiB/s 00:23:19.930 Latency(us) 00:23:19.930 [2024-10-16T05:06:19.429Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:19.930 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:19.930 NVMe0n1 : 1.01 25478.75 99.53 0.00 0.00 5012.30 2116.27 15073.28 00:23:19.930 [2024-10-16T05:06:19.429Z] =================================================================================================================== 00:23:19.930 [2024-10-16T05:06:19.429Z] Total : 25478.75 99.53 0.00 0.00 5012.30 2116.27 15073.28 00:23:19.930 Received shutdown signal, test time was about 1.000000 seconds 00:23:19.930 00:23:19.930 Latency(us) 00:23:19.930 [2024-10-16T05:06:19.429Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:19.930 [2024-10-16T05:06:19.429Z] =================================================================================================================== 00:23:19.930 [2024-10-16T05:06:19.429Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:19.930 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:19.930 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:19.930 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:23:19.930 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:23:19.930 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:19.930 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:23:19.930 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:19.930 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:23:19.930 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:19.930 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:19.930 rmmod nvme_tcp 00:23:19.930 rmmod nvme_fabrics 00:23:19.930 rmmod nvme_keyring 00:23:19.930 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:19.930 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:23:19.930 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:23:19.930 
07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@515 -- # '[' -n 3203320 ']' 00:23:19.930 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # killprocess 3203320 00:23:19.930 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 3203320 ']' 00:23:19.930 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 3203320 00:23:19.930 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:23:19.930 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:19.930 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3203320 00:23:20.191 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:20.191 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:20.191 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3203320' 00:23:20.191 killing process with pid 3203320 00:23:20.191 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 3203320 00:23:20.191 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 3203320 00:23:20.191 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:20.191 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:20.191 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:20.191 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:23:20.191 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-save 00:23:20.191 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:20.191 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-restore 00:23:20.191 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:20.191 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:20.191 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:20.191 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:20.191 07:06:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:22.739 07:06:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:22.739 00:23:22.739 real 0m14.198s 00:23:22.739 user 0m17.779s 00:23:22.739 sys 0m6.526s 00:23:22.739 07:06:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:22.739 07:06:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:22.739 ************************************ 00:23:22.739 END TEST nvmf_multicontroller 00:23:22.739 ************************************ 00:23:22.739 07:06:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:23:22.739 07:06:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:22.739 07:06:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:22.739 07:06:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.739 ************************************ 00:23:22.739 START TEST nvmf_aer 00:23:22.739 ************************************ 00:23:22.739 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:22.739 * Looking for test storage... 00:23:22.739 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:22.739 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:22.739 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lcov --version 00:23:22.739 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:22.739 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:22.739 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:22.739 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:22.739 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:22.739 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:23:22.739 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:23:22.739 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:23:22.739 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:23:22.739 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:23:22.739 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:23:22.739 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:23:22.739 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:22.739 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:23:22.739 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:23:22.739 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:22.739 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:22.739 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:23:22.739 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:23:22.739 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:22.739 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:23:22.739 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:23:22.739 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:23:22.739 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:23:22.739 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:22.739 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:23:22.739 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:23:22.739 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:22.739 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:22.739 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:23:22.739 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:22.739 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:22.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:22.739 --rc genhtml_branch_coverage=1 00:23:22.739 --rc genhtml_function_coverage=1 00:23:22.739 --rc genhtml_legend=1 00:23:22.739 --rc geninfo_all_blocks=1 00:23:22.739 --rc geninfo_unexecuted_blocks=1 00:23:22.739 00:23:22.739 ' 00:23:22.739 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:22.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:22.739 --rc genhtml_branch_coverage=1 00:23:22.739 --rc genhtml_function_coverage=1 00:23:22.739 --rc genhtml_legend=1 00:23:22.739 --rc geninfo_all_blocks=1 00:23:22.739 --rc geninfo_unexecuted_blocks=1 00:23:22.739 00:23:22.739 ' 00:23:22.739 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:22.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:22.739 --rc genhtml_branch_coverage=1 00:23:22.739 --rc genhtml_function_coverage=1 00:23:22.739 --rc genhtml_legend=1 00:23:22.739 --rc geninfo_all_blocks=1 00:23:22.739 --rc geninfo_unexecuted_blocks=1 00:23:22.739 00:23:22.739 ' 00:23:22.739 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:22.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:22.739 --rc genhtml_branch_coverage=1 00:23:22.740 --rc genhtml_function_coverage=1 00:23:22.740 --rc genhtml_legend=1 00:23:22.740 --rc geninfo_all_blocks=1 00:23:22.740 --rc geninfo_unexecuted_blocks=1 00:23:22.740 00:23:22.740 ' 00:23:22.740 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:22.740 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:22.740 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:22.740 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:22.740 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:22.740 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:22.740 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:22.740 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:22.740 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:22.740 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:22.740 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:22.740 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:22.740 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:22.740 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:22.740 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:22.740 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:22.740 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:22.740 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:22.740 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:22.740 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:23:22.740 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:22.740 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:22.740 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:22.740 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.740 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.740 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.740 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:23:22.740 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.740 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:23:22.740 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:22.740 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:22.740 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:22.740 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:22.740 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:22.740 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:22.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:22.740 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:22.740 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:22.740 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:22.740 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:22.740 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:22.740 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:22.740 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:22.740 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:22.740 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:22.740 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.740 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:22.740 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:22.740 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:22.740 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:23:22.740 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:23:22.740 07:06:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:30.884 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:30.884 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:23:30.884 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:30.884 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:30.884 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:30.884 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:30.884 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:30.884 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:23:30.884 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:30.884 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:23:30.884 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:23:30.884 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:23:30.884 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:23:30.884 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:23:30.884 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:23:30.884 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:30.884 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:30.884 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:30.884 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:30.884 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:30.884 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:30.884 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:30.884 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:30.884 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:30.884 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:30.884 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:30.884 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:30.884 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:30.884 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:30.884 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:30.884 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:30.884 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:23:30.884 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:30.884 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:30.885 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:30.885 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:30.885 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:30.885 07:06:29 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:30.885 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # is_hw=yes 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:30.885 
07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:30.885 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:30.885 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.519 ms 00:23:30.885 00:23:30.885 --- 10.0.0.2 ping statistics --- 00:23:30.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:30.885 rtt min/avg/max/mdev = 0.519/0.519/0.519/0.000 ms 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:30.885 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:30.885 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:23:30.885 00:23:30.885 --- 10.0.0.1 ping statistics --- 00:23:30.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:30.885 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # return 0 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # nvmfpid=3208366 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # waitforlisten 3208366 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 3208366 ']' 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:30.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:30.885 07:06:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:30.885 [2024-10-16 07:06:29.558631] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
00:23:30.885 [2024-10-16 07:06:29.558698] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:30.885 [2024-10-16 07:06:29.648287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:30.885 [2024-10-16 07:06:29.701696] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:30.885 [2024-10-16 07:06:29.701746] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:30.885 [2024-10-16 07:06:29.701754] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:30.886 [2024-10-16 07:06:29.701761] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:30.886 [2024-10-16 07:06:29.701768] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:30.886 [2024-10-16 07:06:29.703880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:30.886 [2024-10-16 07:06:29.703982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:30.886 [2024-10-16 07:06:29.704292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:30.886 [2024-10-16 07:06:29.704392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:30.886 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:30.886 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:23:30.886 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:30.886 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:30.886 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:31.147 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:31.147 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:31.147 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.147 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:31.147 [2024-10-16 07:06:30.428143] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:31.147 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.147 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:31.147 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.147 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:31.147 Malloc0 00:23:31.147 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.147 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:31.147 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.147 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:31.147 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:23:31.147 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:31.147 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.147 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:31.147 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.147 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:31.147 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.147 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:31.147 [2024-10-16 07:06:30.505921] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:31.147 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.147 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:31.147 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.147 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:31.147 [ 00:23:31.147 { 00:23:31.147 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:31.147 "subtype": "Discovery", 00:23:31.147 "listen_addresses": [], 00:23:31.147 "allow_any_host": true, 00:23:31.147 "hosts": [] 00:23:31.147 }, 00:23:31.147 { 00:23:31.147 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.147 "subtype": "NVMe", 00:23:31.147 "listen_addresses": [ 00:23:31.147 { 00:23:31.147 "trtype": "TCP", 00:23:31.147 "adrfam": "IPv4", 00:23:31.147 "traddr": "10.0.0.2", 00:23:31.147 "trsvcid": "4420" 00:23:31.147 } 00:23:31.147 ], 00:23:31.147 "allow_any_host": true, 00:23:31.147 "hosts": [], 00:23:31.147 "serial_number": "SPDK00000000000001", 00:23:31.147 "model_number": "SPDK bdev Controller", 00:23:31.147 "max_namespaces": 2, 00:23:31.147 "min_cntlid": 1, 00:23:31.147 "max_cntlid": 65519, 00:23:31.147 "namespaces": [ 00:23:31.147 { 00:23:31.147 "nsid": 1, 00:23:31.147 "bdev_name": "Malloc0", 00:23:31.147 "name": "Malloc0", 00:23:31.147 "nguid": "364E26EA908441769496217F6592A548", 00:23:31.147 "uuid": "364e26ea-9084-4176-9496-217f6592a548" 00:23:31.147 } 00:23:31.147 ] 00:23:31.147 } 00:23:31.147 ] 00:23:31.147 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.147 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:31.147 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:31.148 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3208523 00:23:31.148 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:31.148 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:31.148 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:23:31.148 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:31.148 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:23:31.148 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:23:31.148 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:31.148 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:31.148 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:23:31.148 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:23:31.148 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:31.408 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:31.409 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:31.409 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:23:31.409 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:31.409 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.409 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:31.409 Malloc1 00:23:31.409 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.409 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:31.409 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.409 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:31.409 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.409 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:31.409 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.409 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:31.409 Asynchronous Event Request test 00:23:31.409 Attaching to 10.0.0.2 00:23:31.409 Attached to 10.0.0.2 00:23:31.409 Registering asynchronous event callbacks... 00:23:31.409 Starting namespace attribute notice tests for all controllers... 00:23:31.409 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:31.409 aer_cb - Changed Namespace 00:23:31.409 Cleaning up... 
00:23:31.409 [ 00:23:31.409 { 00:23:31.409 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:31.409 "subtype": "Discovery", 00:23:31.409 "listen_addresses": [], 00:23:31.409 "allow_any_host": true, 00:23:31.409 "hosts": [] 00:23:31.409 }, 00:23:31.409 { 00:23:31.409 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.409 "subtype": "NVMe", 00:23:31.409 "listen_addresses": [ 00:23:31.409 { 00:23:31.409 "trtype": "TCP", 00:23:31.409 "adrfam": "IPv4", 00:23:31.409 "traddr": "10.0.0.2", 00:23:31.409 "trsvcid": "4420" 00:23:31.409 } 00:23:31.409 ], 00:23:31.409 "allow_any_host": true, 00:23:31.409 "hosts": [], 00:23:31.409 "serial_number": "SPDK00000000000001", 00:23:31.409 "model_number": "SPDK bdev Controller", 00:23:31.409 "max_namespaces": 2, 00:23:31.409 "min_cntlid": 1, 00:23:31.409 "max_cntlid": 65519, 00:23:31.409 "namespaces": [ 00:23:31.409 { 00:23:31.409 "nsid": 1, 00:23:31.409 "bdev_name": "Malloc0", 00:23:31.409 "name": "Malloc0", 00:23:31.409 "nguid": "364E26EA908441769496217F6592A548", 00:23:31.409 "uuid": "364e26ea-9084-4176-9496-217f6592a548" 00:23:31.409 }, 00:23:31.409 { 00:23:31.409 "nsid": 2, 00:23:31.409 "bdev_name": "Malloc1", 00:23:31.409 "name": "Malloc1", 00:23:31.409 "nguid": "AED7907F3E104F17A98C64DAA039E2E0", 00:23:31.409 "uuid": "aed7907f-3e10-4f17-a98c-64daa039e2e0" 00:23:31.409 } 00:23:31.409 ] 00:23:31.409 } 00:23:31.409 ] 00:23:31.409 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.409 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3208523 00:23:31.409 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:31.409 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.409 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:31.409 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.409 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:31.409 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.409 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:31.409 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.409 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:31.409 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.409 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:31.409 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.409 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:31.409 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:31.409 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:31.409 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:23:31.409 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:31.409 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:23:31.409 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:31.409 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:31.409 rmmod 
nvme_tcp 00:23:31.670 rmmod nvme_fabrics 00:23:31.670 rmmod nvme_keyring 00:23:31.670 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:31.670 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:23:31.670 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:23:31.670 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@515 -- # '[' -n 3208366 ']' 00:23:31.670 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # killprocess 3208366 00:23:31.670 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 3208366 ']' 00:23:31.670 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 3208366 00:23:31.670 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:23:31.670 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:31.670 07:06:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3208366 00:23:31.670 07:06:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:31.670 07:06:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:31.670 07:06:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3208366' 00:23:31.670 killing process with pid 3208366 00:23:31.670 07:06:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 3208366 00:23:31.670 07:06:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 3208366 00:23:31.930 07:06:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:31.930 07:06:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:31.930 07:06:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:31.930 07:06:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:23:31.930 07:06:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-save 00:23:31.930 07:06:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:31.930 07:06:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-restore 00:23:31.930 07:06:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:31.930 07:06:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:31.930 07:06:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:31.930 07:06:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:31.931 07:06:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.844 07:06:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:33.844 00:23:33.844 real 0m11.534s 00:23:33.844 user 0m8.153s 00:23:33.844 sys 0m6.251s 00:23:33.844 07:06:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:33.844 07:06:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:33.844 ************************************ 00:23:33.844 END TEST nvmf_aer 00:23:33.844 ************************************ 00:23:33.844 07:06:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:33.844 07:06:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:33.844 07:06:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:33.844 07:06:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.105 ************************************ 00:23:34.105 START TEST nvmf_async_init 00:23:34.105 ************************************ 00:23:34.105 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:34.105 * Looking for test storage... 00:23:34.105 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:34.105 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:34.105 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lcov --version 00:23:34.105 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:34.105 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:34.105 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:34.105 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:34.105 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:34.105 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:23:34.105 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:23:34.105 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:23:34.105 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:23:34.105 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:23:34.105 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:23:34.105 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:23:34.105 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:34.105 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:23:34.105 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:23:34.105 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:34.105 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:34.105 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:23:34.105 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:23:34.105 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:34.105 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:23:34.105 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:23:34.105 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:23:34.105 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:23:34.105 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:34.105 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:23:34.105 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:23:34.105 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:34.105 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:34.105 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:23:34.105 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:34.105 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:34.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:34.105 --rc genhtml_branch_coverage=1 00:23:34.105 --rc genhtml_function_coverage=1 00:23:34.105 --rc genhtml_legend=1 00:23:34.105 --rc geninfo_all_blocks=1 00:23:34.105 --rc geninfo_unexecuted_blocks=1 00:23:34.105 00:23:34.105 ' 00:23:34.105 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:34.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:34.105 --rc genhtml_branch_coverage=1 00:23:34.105 --rc genhtml_function_coverage=1 00:23:34.105 --rc genhtml_legend=1 00:23:34.105 --rc geninfo_all_blocks=1 00:23:34.105 --rc geninfo_unexecuted_blocks=1 00:23:34.105 00:23:34.105 ' 00:23:34.105 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:34.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:34.105 --rc genhtml_branch_coverage=1 00:23:34.105 --rc genhtml_function_coverage=1 00:23:34.105 --rc genhtml_legend=1 00:23:34.105 --rc geninfo_all_blocks=1 00:23:34.105 --rc geninfo_unexecuted_blocks=1 00:23:34.105 00:23:34.105 ' 00:23:34.105 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:34.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:34.105 --rc genhtml_branch_coverage=1 00:23:34.105 --rc genhtml_function_coverage=1 00:23:34.105 --rc genhtml_legend=1 00:23:34.105 --rc geninfo_all_blocks=1 00:23:34.105 --rc geninfo_unexecuted_blocks=1 00:23:34.105 00:23:34.105 ' 00:23:34.105 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:34.105 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:34.105 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:34.105 07:06:33 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:34.105 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:34.105 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:34.105 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:34.105 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:34.105 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:34.105 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:34.105 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:34.105 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:34.105 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:34.105 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:34.105 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:34.105 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:34.105 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:34.105 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:34.105 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:34.105 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:23:34.105 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:34.105 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:34.105 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:34.106 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.106 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.106 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.106 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:34.106 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.106 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:23:34.106 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:34.106 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:34.106 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:34.106 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:34.106 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:34.106 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:34.106 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:34.106 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:34.106 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:34.106 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:34.106 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:34.106 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:34.106 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:34.106 07:06:33 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:34.367 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:34.367 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:34.367 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=1e82ac4f4d9e4e0b86ae3ca48cafbf31 00:23:34.367 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:34.367 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:34.367 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:34.367 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:34.367 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:34.367 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:34.367 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:34.367 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:34.367 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:34.367 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:34.367 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:34.367 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:23:34.367 07:06:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:42.509 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:42.509 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:23:42.509 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:42.509 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:42.509 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:42.509 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:42.509 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:42.509 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:23:42.509 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:42.509 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:23:42.509 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:23:42.509 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:23:42.509 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:23:42.509 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:23:42.509 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:23:42.509 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:42.509 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:42.509 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:42.509 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:42.509 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:42.509 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:42.509 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:42.509 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:42.509 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:42.509 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:42.509 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:42.509 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:42.509 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:42.509 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:42.509 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:42.510 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:42.510 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:42.510 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:42.510 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:42.510 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:42.510 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:42.510 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:42.510 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:42.510 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:42.510 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:42.510 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:42.510 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:42.510 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:42.510 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:42.510 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:42.510 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:42.510 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:42.510 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:42.510 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:42.510 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:42.510 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:42.510 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:42.510 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:42.510 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:42.510 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:42.510 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:42.510 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:42.510 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:42.510 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:42.510 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:42.510 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:42.510 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:42.510 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:42.510 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:42.510 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:42.510 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:42.510 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:42.510 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:42.510 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:42.510 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:42.510 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:42.510 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:42.510 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:42.510 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # is_hw=yes 00:23:42.510 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:42.510 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:42.510 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:42.510 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:42.510 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:42.510 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:42.510 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:42.510 07:06:40 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:42.510 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:42.510 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:42.510 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:42.510 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:42.510 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:42.510 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:42.510 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:42.510 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:42.510 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:42.510 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:42.510 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:42.510 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:42.510 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:42.510 07:06:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:42.510 07:06:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:42.510 07:06:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:42.510 07:06:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:42.510 07:06:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:42.510 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:42.510 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.663 ms 00:23:42.510 00:23:42.510 --- 10.0.0.2 ping statistics --- 00:23:42.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:42.510 rtt min/avg/max/mdev = 0.663/0.663/0.663/0.000 ms 00:23:42.510 07:06:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:42.510 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:42.510 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:23:42.510 00:23:42.510 --- 10.0.0.1 ping statistics --- 00:23:42.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:42.510 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:23:42.510 07:06:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:42.510 07:06:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # return 0 00:23:42.510 07:06:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:42.510 07:06:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:42.510 07:06:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:42.510 07:06:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:42.510 07:06:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:42.510 07:06:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:42.510 07:06:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:42.510 07:06:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:42.510 07:06:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:42.510 07:06:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:42.510 07:06:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:42.510 07:06:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # nvmfpid=3212738 00:23:42.510 07:06:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # waitforlisten 3212738 00:23:42.510 07:06:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:42.510 07:06:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 3212738 ']' 00:23:42.510 07:06:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:42.510 07:06:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:42.510 07:06:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:42.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:42.510 07:06:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:42.510 07:06:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:42.510 [2024-10-16 07:06:41.167699] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
00:23:42.510 [2024-10-16 07:06:41.167771] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:42.510 [2024-10-16 07:06:41.257549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:42.510 [2024-10-16 07:06:41.308022] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:42.510 [2024-10-16 07:06:41.308076] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:42.510 [2024-10-16 07:06:41.308085] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:42.510 [2024-10-16 07:06:41.308092] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:42.510 [2024-10-16 07:06:41.308099] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:42.510 [2024-10-16 07:06:41.308882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:42.510 07:06:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:42.510 07:06:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:23:42.510 07:06:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:42.510 07:06:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:42.510 07:06:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:42.772 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:42.772 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:42.772 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.772 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:42.772 [2024-10-16 07:06:42.038710] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:42.772 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.772 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:42.772 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.772 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:42.772 null0 00:23:42.772 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.772 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:42.772 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.772 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:42.772 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.772 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:42.772 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:23:42.772 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:42.772 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.772 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 1e82ac4f4d9e4e0b86ae3ca48cafbf31 00:23:42.772 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.772 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:42.772 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.772 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:42.772 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.772 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:42.772 [2024-10-16 07:06:42.099095] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:42.772 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.772 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:42.772 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.772 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:43.033 nvme0n1 00:23:43.033 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.033 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:43.033 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.033 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:43.033 [ 00:23:43.033 { 00:23:43.033 "name": "nvme0n1", 00:23:43.033 "aliases": [ 00:23:43.033 "1e82ac4f-4d9e-4e0b-86ae-3ca48cafbf31" 00:23:43.033 ], 00:23:43.033 "product_name": "NVMe disk", 00:23:43.033 "block_size": 512, 00:23:43.033 "num_blocks": 2097152, 00:23:43.033 "uuid": "1e82ac4f-4d9e-4e0b-86ae-3ca48cafbf31", 00:23:43.033 "numa_id": 0, 00:23:43.033 "assigned_rate_limits": { 00:23:43.033 "rw_ios_per_sec": 0, 00:23:43.033 "rw_mbytes_per_sec": 0, 00:23:43.033 "r_mbytes_per_sec": 0, 00:23:43.033 "w_mbytes_per_sec": 0 00:23:43.033 }, 00:23:43.033 "claimed": false, 00:23:43.033 "zoned": false, 00:23:43.033 "supported_io_types": { 00:23:43.033 "read": true, 00:23:43.033 "write": true, 00:23:43.033 "unmap": false, 00:23:43.033 "flush": true, 00:23:43.033 "reset": true, 00:23:43.033 "nvme_admin": true, 00:23:43.033 "nvme_io": true, 00:23:43.033 "nvme_io_md": false, 00:23:43.033 "write_zeroes": true, 00:23:43.033 "zcopy": false, 00:23:43.033 "get_zone_info": false, 00:23:43.033 "zone_management": false, 00:23:43.033 "zone_append": false, 00:23:43.033 "compare": true, 00:23:43.033 "compare_and_write": true, 00:23:43.033 "abort": true, 00:23:43.033 "seek_hole": false, 00:23:43.033 "seek_data": false, 00:23:43.033 "copy": true, 00:23:43.033 "nvme_iov_md": false 00:23:43.033 }, 00:23:43.033 
"memory_domains": [ 00:23:43.033 { 00:23:43.033 "dma_device_id": "system", 00:23:43.033 "dma_device_type": 1 00:23:43.033 } 00:23:43.033 ], 00:23:43.033 "driver_specific": { 00:23:43.033 "nvme": [ 00:23:43.033 { 00:23:43.033 "trid": { 00:23:43.033 "trtype": "TCP", 00:23:43.033 "adrfam": "IPv4", 00:23:43.033 "traddr": "10.0.0.2", 00:23:43.033 "trsvcid": "4420", 00:23:43.033 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:43.033 }, 00:23:43.033 "ctrlr_data": { 00:23:43.033 "cntlid": 1, 00:23:43.033 "vendor_id": "0x8086", 00:23:43.033 "model_number": "SPDK bdev Controller", 00:23:43.033 "serial_number": "00000000000000000000", 00:23:43.033 "firmware_revision": "25.01", 00:23:43.033 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:43.033 "oacs": { 00:23:43.033 "security": 0, 00:23:43.033 "format": 0, 00:23:43.033 "firmware": 0, 00:23:43.033 "ns_manage": 0 00:23:43.033 }, 00:23:43.033 "multi_ctrlr": true, 00:23:43.033 "ana_reporting": false 00:23:43.033 }, 00:23:43.033 "vs": { 00:23:43.033 "nvme_version": "1.3" 00:23:43.033 }, 00:23:43.033 "ns_data": { 00:23:43.033 "id": 1, 00:23:43.034 "can_share": true 00:23:43.034 } 00:23:43.034 } 00:23:43.034 ], 00:23:43.034 "mp_policy": "active_passive" 00:23:43.034 } 00:23:43.034 } 00:23:43.034 ] 00:23:43.034 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.034 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:43.034 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.034 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:43.034 [2024-10-16 07:06:42.376672] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:43.034 [2024-10-16 07:06:42.376765] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b6700 (9): Bad file descriptor 00:23:43.034 [2024-10-16 07:06:42.508953] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:43.034 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.034 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:43.034 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.034 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:43.034 [ 00:23:43.034 { 00:23:43.034 "name": "nvme0n1", 00:23:43.034 "aliases": [ 00:23:43.034 "1e82ac4f-4d9e-4e0b-86ae-3ca48cafbf31" 00:23:43.034 ], 00:23:43.034 "product_name": "NVMe disk", 00:23:43.034 "block_size": 512, 00:23:43.034 "num_blocks": 2097152, 00:23:43.034 "uuid": "1e82ac4f-4d9e-4e0b-86ae-3ca48cafbf31", 00:23:43.034 "numa_id": 0, 00:23:43.034 "assigned_rate_limits": { 00:23:43.034 "rw_ios_per_sec": 0, 00:23:43.034 "rw_mbytes_per_sec": 0, 00:23:43.034 "r_mbytes_per_sec": 0, 00:23:43.034 "w_mbytes_per_sec": 0 00:23:43.034 }, 00:23:43.034 "claimed": false, 00:23:43.034 "zoned": false, 00:23:43.034 "supported_io_types": { 00:23:43.034 "read": true, 00:23:43.034 "write": true, 00:23:43.034 "unmap": false, 00:23:43.034 "flush": true, 00:23:43.034 "reset": true, 00:23:43.034 "nvme_admin": true, 00:23:43.034 "nvme_io": true, 00:23:43.034 "nvme_io_md": false, 00:23:43.034 "write_zeroes": true, 00:23:43.034 "zcopy": false, 00:23:43.034 "get_zone_info": false, 00:23:43.034 "zone_management": false, 00:23:43.034 "zone_append": false, 00:23:43.034 "compare": true, 00:23:43.034 "compare_and_write": true, 00:23:43.034 "abort": true, 00:23:43.034 "seek_hole": false, 00:23:43.034 "seek_data": false, 00:23:43.034 "copy": true, 00:23:43.034 "nvme_iov_md": false 00:23:43.034 }, 00:23:43.034 "memory_domains": [ 00:23:43.034 { 00:23:43.034 "dma_device_id": "system", 00:23:43.034 "dma_device_type": 1 00:23:43.034 } 00:23:43.034 ], 00:23:43.034 "driver_specific": { 00:23:43.034 "nvme": [ 00:23:43.034 { 00:23:43.034 "trid": { 00:23:43.034 "trtype": "TCP", 00:23:43.034 "adrfam": "IPv4", 00:23:43.034 "traddr": "10.0.0.2", 00:23:43.034 "trsvcid": "4420", 00:23:43.034 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:43.034 }, 00:23:43.034 "ctrlr_data": { 00:23:43.034 "cntlid": 2, 00:23:43.034 "vendor_id": "0x8086", 00:23:43.034 "model_number": "SPDK bdev Controller", 00:23:43.034 "serial_number": "00000000000000000000", 00:23:43.034 "firmware_revision": "25.01", 00:23:43.034 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:43.034 "oacs": { 00:23:43.034 "security": 0, 00:23:43.034 "format": 0, 00:23:43.034 "firmware": 0, 00:23:43.034 "ns_manage": 0 00:23:43.034 }, 00:23:43.034 "multi_ctrlr": true, 00:23:43.034 "ana_reporting": false 00:23:43.034 }, 00:23:43.034 "vs": { 00:23:43.295 "nvme_version": "1.3" 00:23:43.295 }, 00:23:43.295 "ns_data": { 00:23:43.295 "id": 1, 00:23:43.295 "can_share": true 00:23:43.295 } 00:23:43.295 } 00:23:43.295 ], 00:23:43.295 "mp_policy": "active_passive" 00:23:43.295 } 00:23:43.295 } 00:23:43.295 ] 00:23:43.295 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.295 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:43.295 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.295 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:43.295 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
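The second bdev_get_bdevs dump is the check that the reset worked: cntlid moved from 1 to 2 because bdev_nvme_reset_controller dropped the qpair (the "Bad file descriptor" on 0x22b6700 above) and the reconnect established a fresh controller association. bdev_nvme_detach_controller then removes nvme0n1 before the TLS variant reattaches on port 4421. Condensed into the equivalent commands, assuming the same rpc.py; jq is used here purely for readability:

    $rpc bdev_nvme_reset_controller nvme0      # disconnect and reconnect the initiator-side controller
    $rpc bdev_get_bdevs -b nvme0n1 \
        | jq '.[0].driver_specific.nvme[0].ctrlr_data.cntlid'   # 2 after the reset, was 1 before
    $rpc bdev_nvme_detach_controller nvme0     # tear down the connection and delete nvme0n1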
00:23:43.295 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:43.295 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.yc6Cv3aWsU 00:23:43.295 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:43.295 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.yc6Cv3aWsU 00:23:43.295 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.yc6Cv3aWsU 00:23:43.295 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.295 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:43.295 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.295 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:43.295 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.295 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:43.295 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.295 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:43.295 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.295 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:43.295 [2024-10-16 07:06:42.597361] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:43.295 [2024-10-16 07:06:42.597525] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:43.295 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.295 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:23:43.295 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.295 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:43.295 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.295 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:43.295 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.295 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:43.295 [2024-10-16 07:06:42.621442] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:43.295 nvme0n1 00:23:43.295 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.295 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:23:43.295 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.295 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:43.295 [ 00:23:43.295 { 00:23:43.295 "name": "nvme0n1", 00:23:43.295 "aliases": [ 00:23:43.295 "1e82ac4f-4d9e-4e0b-86ae-3ca48cafbf31" 00:23:43.295 ], 00:23:43.295 "product_name": "NVMe disk", 00:23:43.295 "block_size": 512, 00:23:43.295 "num_blocks": 2097152, 00:23:43.295 "uuid": "1e82ac4f-4d9e-4e0b-86ae-3ca48cafbf31", 00:23:43.295 "numa_id": 0, 00:23:43.295 "assigned_rate_limits": { 00:23:43.295 "rw_ios_per_sec": 0, 00:23:43.295 "rw_mbytes_per_sec": 0, 00:23:43.295 "r_mbytes_per_sec": 0, 00:23:43.295 "w_mbytes_per_sec": 0 00:23:43.295 }, 00:23:43.295 "claimed": false, 00:23:43.295 "zoned": false, 00:23:43.295 "supported_io_types": { 00:23:43.295 "read": true, 00:23:43.295 "write": true, 00:23:43.295 "unmap": false, 00:23:43.295 "flush": true, 00:23:43.295 "reset": true, 00:23:43.295 "nvme_admin": true, 00:23:43.295 "nvme_io": true, 00:23:43.295 "nvme_io_md": false, 00:23:43.295 "write_zeroes": true, 00:23:43.295 "zcopy": false, 00:23:43.295 "get_zone_info": false, 00:23:43.295 "zone_management": false, 00:23:43.295 "zone_append": false, 00:23:43.295 "compare": true, 00:23:43.295 "compare_and_write": true, 00:23:43.295 "abort": true, 00:23:43.295 "seek_hole": false, 00:23:43.295 "seek_data": false, 00:23:43.295 "copy": true, 00:23:43.295 "nvme_iov_md": false 00:23:43.295 }, 00:23:43.295 "memory_domains": [ 00:23:43.295 { 00:23:43.295 "dma_device_id": "system", 00:23:43.295 "dma_device_type": 1 00:23:43.295 } 00:23:43.295 ], 00:23:43.295 "driver_specific": { 00:23:43.295 "nvme": [ 00:23:43.295 { 00:23:43.295 "trid": { 00:23:43.295 "trtype": "TCP", 00:23:43.295 "adrfam": "IPv4", 00:23:43.295 "traddr": "10.0.0.2", 00:23:43.295 "trsvcid": "4421", 00:23:43.295 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:43.295 }, 00:23:43.295 "ctrlr_data": { 00:23:43.295 "cntlid": 3, 00:23:43.295 "vendor_id": "0x8086", 00:23:43.295 "model_number": "SPDK bdev Controller", 00:23:43.295 "serial_number": "00000000000000000000", 00:23:43.295 "firmware_revision": "25.01", 00:23:43.295 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:43.295 "oacs": { 00:23:43.295 "security": 0, 00:23:43.295 "format": 0, 00:23:43.295 "firmware": 0, 00:23:43.295 "ns_manage": 0 00:23:43.295 }, 00:23:43.295 "multi_ctrlr": true, 00:23:43.295 "ana_reporting": false 00:23:43.295 }, 00:23:43.295 "vs": { 00:23:43.295 "nvme_version": "1.3" 00:23:43.295 }, 00:23:43.295 "ns_data": { 00:23:43.295 "id": 1, 00:23:43.295 "can_share": true 00:23:43.295 } 00:23:43.295 } 00:23:43.295 ], 00:23:43.295 "mp_policy": "active_passive" 00:23:43.295 } 00:23:43.295 } 00:23:43.295 ] 00:23:43.295 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.295 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:43.295 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.295 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:43.295 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.295 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.yc6Cv3aWsU 00:23:43.296 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
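The TLS leg that just finished wires the PSK path end to end: the configured interchange key is written to a file with restricted permissions, keyring_file_add_key registers it as key0, the subsystem drops allow-any-host in favor of a single host entry bound to that key, a --secure-channel listener opens on 4421, and the initiator attaches with the matching hostnqn and --psk. Condensed from the trace above, assuming the same rpc.py (the key literal is the test key from this run, not a secret):

    key=/tmp/tmp.yc6Cv3aWsU       # mktemp output in this run
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > $key
    chmod 0600 $key               # restrict permissions before registering the key
    $rpc keyring_file_add_key key0 $key
    $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0

Both the listen and attach steps log that TLS support is considered experimental, matching the notices in the trace.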
00:23:43.296 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:23:43.296 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:43.296 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:23:43.296 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:43.296 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:23:43.296 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:43.296 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:43.296 rmmod nvme_tcp 00:23:43.296 rmmod nvme_fabrics 00:23:43.296 rmmod nvme_keyring 00:23:43.556 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:43.556 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:23:43.556 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:23:43.556 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@515 -- # '[' -n 3212738 ']' 00:23:43.556 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # killprocess 3212738 00:23:43.556 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 3212738 ']' 00:23:43.556 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 3212738 00:23:43.556 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:23:43.556 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:43.556 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3212738 00:23:43.556 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:43.556 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:43.556 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3212738' 00:23:43.556 killing process with pid 3212738 00:23:43.556 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 3212738 00:23:43.556 07:06:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 3212738 00:23:43.556 07:06:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:43.556 07:06:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:43.556 07:06:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:43.556 07:06:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:23:43.556 07:06:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-save 00:23:43.556 07:06:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:43.556 07:06:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-restore 00:23:43.556 07:06:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:43.556 07:06:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:43.556 07:06:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:23:43.556 07:06:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:43.556 07:06:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:46.103 07:06:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:46.103 00:23:46.103 real 0m11.754s 00:23:46.103 user 0m4.308s 00:23:46.103 sys 0m6.010s 00:23:46.103 07:06:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:46.103 07:06:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:46.103 ************************************ 00:23:46.103 END TEST nvmf_async_init 00:23:46.103 ************************************ 00:23:46.103 07:06:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:46.103 07:06:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:46.103 07:06:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:46.103 07:06:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.103 ************************************ 00:23:46.103 START TEST dma 00:23:46.103 ************************************ 00:23:46.103 07:06:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:46.103 * Looking for test storage... 00:23:46.103 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:46.103 07:06:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:46.103 07:06:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lcov --version 00:23:46.103 07:06:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:46.103 07:06:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:46.103 07:06:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:46.103 07:06:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:46.103 07:06:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:46.103 07:06:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:23:46.103 07:06:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:23:46.103 07:06:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:23:46.103 07:06:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:23:46.103 07:06:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:23:46.103 07:06:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:23:46.103 07:06:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:23:46.103 07:06:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:46.103 07:06:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:23:46.103 07:06:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:23:46.103 07:06:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:46.103 07:06:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:46.103 07:06:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:23:46.103 07:06:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:23:46.103 07:06:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:46.103 07:06:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:23:46.103 07:06:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:23:46.103 07:06:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:23:46.103 07:06:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:23:46.103 07:06:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:46.103 07:06:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:23:46.103 07:06:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:23:46.103 07:06:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:46.103 07:06:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:46.103 07:06:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:23:46.103 07:06:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:46.104 07:06:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:46.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.104 --rc genhtml_branch_coverage=1 00:23:46.104 --rc genhtml_function_coverage=1 00:23:46.104 --rc genhtml_legend=1 00:23:46.104 --rc geninfo_all_blocks=1 00:23:46.104 --rc geninfo_unexecuted_blocks=1 00:23:46.104 00:23:46.104 ' 00:23:46.104 07:06:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:46.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.104 --rc genhtml_branch_coverage=1 00:23:46.104 --rc genhtml_function_coverage=1 00:23:46.104 --rc genhtml_legend=1 00:23:46.104 --rc geninfo_all_blocks=1 00:23:46.104 --rc geninfo_unexecuted_blocks=1 00:23:46.104 00:23:46.104 ' 00:23:46.104 07:06:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:46.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.104 --rc genhtml_branch_coverage=1 00:23:46.104 --rc genhtml_function_coverage=1 00:23:46.104 --rc genhtml_legend=1 00:23:46.104 --rc geninfo_all_blocks=1 00:23:46.104 --rc geninfo_unexecuted_blocks=1 00:23:46.104 00:23:46.104 ' 00:23:46.104 07:06:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:46.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.104 --rc genhtml_branch_coverage=1 00:23:46.104 --rc genhtml_function_coverage=1 00:23:46.104 --rc genhtml_legend=1 00:23:46.104 --rc geninfo_all_blocks=1 00:23:46.104 --rc geninfo_unexecuted_blocks=1 00:23:46.104 00:23:46.104 ' 00:23:46.104 07:06:45 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:46.104 07:06:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:23:46.104 07:06:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:46.104 07:06:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:46.104 07:06:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:46.104 07:06:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:46.104 
07:06:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:46.104 07:06:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:46.104 07:06:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:46.104 07:06:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:46.104 07:06:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:46.104 07:06:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:46.104 07:06:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:46.104 07:06:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:46.104 07:06:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:46.104 07:06:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:46.104 07:06:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:46.104 07:06:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:46.104 07:06:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:46.104 07:06:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:23:46.104 07:06:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:46.104 07:06:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:46.104 07:06:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:46.104 07:06:45 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.104 07:06:45 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.104 07:06:45 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.104 07:06:45 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:23:46.104 07:06:45 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.104 07:06:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:23:46.104 07:06:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:46.104 07:06:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:46.104 07:06:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:46.104 07:06:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:46.104 07:06:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:46.104 07:06:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:46.104 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:46.104 07:06:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:46.104 07:06:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:46.104 07:06:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:46.104 07:06:45 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:46.104 07:06:45 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:23:46.104 00:23:46.104 real 0m0.242s 00:23:46.104 user 0m0.135s 00:23:46.104 sys 0m0.122s 00:23:46.104 07:06:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:46.104 07:06:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:23:46.104 ************************************ 00:23:46.104 END TEST dma 00:23:46.104 ************************************ 00:23:46.104 07:06:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:46.104 07:06:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:46.104 07:06:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:46.104 07:06:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.104 ************************************ 00:23:46.104 START TEST nvmf_identify 00:23:46.104 
************************************ 00:23:46.104 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:46.398 * Looking for test storage... 00:23:46.398 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:46.398 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:46.398 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:23:46.398 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:46.398 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:46.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.399 --rc genhtml_branch_coverage=1 00:23:46.399 --rc genhtml_function_coverage=1 00:23:46.399 --rc genhtml_legend=1 00:23:46.399 --rc geninfo_all_blocks=1 00:23:46.399 --rc geninfo_unexecuted_blocks=1 00:23:46.399 00:23:46.399 ' 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:46.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.399 --rc genhtml_branch_coverage=1 00:23:46.399 --rc genhtml_function_coverage=1 00:23:46.399 --rc genhtml_legend=1 00:23:46.399 --rc geninfo_all_blocks=1 00:23:46.399 --rc geninfo_unexecuted_blocks=1 00:23:46.399 00:23:46.399 ' 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:46.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.399 --rc genhtml_branch_coverage=1 00:23:46.399 --rc genhtml_function_coverage=1 00:23:46.399 --rc genhtml_legend=1 00:23:46.399 --rc geninfo_all_blocks=1 00:23:46.399 --rc geninfo_unexecuted_blocks=1 00:23:46.399 00:23:46.399 ' 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:46.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.399 --rc genhtml_branch_coverage=1 00:23:46.399 --rc genhtml_function_coverage=1 00:23:46.399 --rc genhtml_legend=1 00:23:46.399 --rc geninfo_all_blocks=1 00:23:46.399 --rc geninfo_unexecuted_blocks=1 00:23:46.399 00:23:46.399 ' 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:46.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:46.399 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:46.400 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:46.400 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:46.400 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:46.400 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:23:46.400 07:06:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:54.633 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:54.633 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:23:54.633 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:54.633 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:54.633 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:54.633 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:54.633 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:54.633 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:23:54.633 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:54.633 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:23:54.633 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:23:54.633 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:23:54.633 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:23:54.633 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:23:54.633 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:23:54.633 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:54.633 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:54.633 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:54.633 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:54.633 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:54.633 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:54.633 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:54.633 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:54.633 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:54.633 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:54.633 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:54.633 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:54.633 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:54.633 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:54.633 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:54.633 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:54.633 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:54.633 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:54.633 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:54.633 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:54.633 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:54.633 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:54.633 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:54.633 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:54.633 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:54.633 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:54.633 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:54.633 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:54.633 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:54.633 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:54.633 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:54.633 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:54.634 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:54.634 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:54.634 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:54.634 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:54.634 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:54.634 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:54.634 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:54.634 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
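gather_supported_nvmf_pci_devs has just matched both ports of an Intel E810 NIC (vendor:device 0x8086:0x159b) against its ID tables and, because the transport is TCP, resolves each port's kernel netdev through sysfs, producing the "Found net devices under 0000:4b:00.x" lines below. The script walks a prebuilt pci_bus_cache; a rough hand-run approximation of the same lookup (lspci here stands in for that cache) would be:

#!/usr/bin/env bash
# Approximation only: the harness uses its own pci_bus_cache, not lspci.
# 8086:159b is the E810 vendor:device pair matched in this trace.
for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
    for net_dev in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$net_dev" ] && echo "Found net devices under $pci: ${net_dev##*/}"
    done
done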
00:23:54.634 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:54.634 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:54.634 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:54.634 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:54.634 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:54.634 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:54.634 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:54.634 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:54.634 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:54.634 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:54.634 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:54.634 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:54.634 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:54.634 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:54.634 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:54.634 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:54.634 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:54.634 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:54.634 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # is_hw=yes 00:23:54.634 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:54.634 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:54.634 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:54.634 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:54.634 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:54.634 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:54.634 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:54.634 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:54.634 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:54.634 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:54.634 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:54.634 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:54.634 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:54.634 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:23:54.634 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:54.634 07:06:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:54.634 07:06:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:54.634 07:06:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:54.634 07:06:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:54.634 07:06:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:54.634 07:06:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:54.634 07:06:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:54.634 07:06:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:54.634 07:06:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:54.634 07:06:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:54.634 07:06:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:54.634 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:54.634 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.597 ms 00:23:54.634 00:23:54.634 --- 10.0.0.2 ping statistics --- 00:23:54.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:54.634 rtt min/avg/max/mdev = 0.597/0.597/0.597/0.000 ms 00:23:54.634 07:06:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:54.634 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:54.634 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:23:54.634 00:23:54.634 --- 10.0.0.1 ping statistics --- 00:23:54.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:54.634 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:23:54.634 07:06:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:54.634 07:06:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # return 0 00:23:54.634 07:06:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:54.634 07:06:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:54.634 07:06:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:54.634 07:06:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:54.634 07:06:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:54.634 07:06:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:54.634 07:06:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:54.634 07:06:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:54.634 07:06:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:54.634 07:06:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:54.634 07:06:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3217461 00:23:54.634 07:06:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:54.634 07:06:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:54.634 07:06:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3217461 00:23:54.634 07:06:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 3217461 ']' 00:23:54.634 07:06:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:54.634 07:06:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:54.634 07:06:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:54.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:54.634 07:06:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:54.634 07:06:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:54.634 [2024-10-16 07:06:53.409819] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
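Both pings succeeded, so nvmf_tcp_init returns 0: cvl_0_0 now lives in the cvl_0_0_ns_spdk namespace as 10.0.0.2 (target side), cvl_0_1 stays in the root namespace as 10.0.0.1 (initiator side), TCP/4420 is open, and nvmf_tgt is being launched inside the namespace (its EAL startup continues below). A condensed replay of the traced setup commands:

#!/usr/bin/env bash
# Replay of nvmf_tcp_init as traced above; interface names, addresses and
# port are the ones this run used.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side (root ns)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator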
00:23:54.634 [2024-10-16 07:06:53.409900] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:54.634 [2024-10-16 07:06:53.498814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:54.634 [2024-10-16 07:06:53.552680] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:54.634 [2024-10-16 07:06:53.552734] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:54.634 [2024-10-16 07:06:53.552743] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:54.634 [2024-10-16 07:06:53.552750] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:54.634 [2024-10-16 07:06:53.552757] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:54.634 [2024-10-16 07:06:53.554871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:54.634 [2024-10-16 07:06:53.554974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:54.634 [2024-10-16 07:06:53.555308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:54.634 [2024-10-16 07:06:53.555421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:54.896 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:54.896 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:23:54.896 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:54.896 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.896 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:54.896 [2024-10-16 07:06:54.246447] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:54.896 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.896 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:54.896 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:54.896 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:54.896 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:54.896 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.896 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:54.896 Malloc0 00:23:54.896 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.896 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:54.896 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.896 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:54.896 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.896 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:54.896 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.896 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:54.896 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.896 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:54.896 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.896 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:54.897 [2024-10-16 07:06:54.369087] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:54.897 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.897 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:54.897 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.897 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:54.897 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.897 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:54.897 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.897 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:55.178 [ 00:23:55.178 { 00:23:55.178 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:55.178 "subtype": "Discovery", 00:23:55.178 "listen_addresses": [ 00:23:55.178 { 00:23:55.178 "trtype": "TCP", 00:23:55.178 "adrfam": "IPv4", 00:23:55.178 "traddr": "10.0.0.2", 00:23:55.178 "trsvcid": "4420" 00:23:55.178 } 00:23:55.178 ], 00:23:55.178 "allow_any_host": true, 00:23:55.178 "hosts": [] 00:23:55.178 }, 00:23:55.178 { 00:23:55.178 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:55.178 "subtype": "NVMe", 00:23:55.178 "listen_addresses": [ 00:23:55.178 { 00:23:55.178 "trtype": "TCP", 00:23:55.178 "adrfam": "IPv4", 00:23:55.178 "traddr": "10.0.0.2", 00:23:55.178 "trsvcid": "4420" 00:23:55.178 } 00:23:55.178 ], 00:23:55.178 "allow_any_host": true, 00:23:55.178 "hosts": [], 00:23:55.178 "serial_number": "SPDK00000000000001", 00:23:55.178 "model_number": "SPDK bdev Controller", 00:23:55.178 "max_namespaces": 32, 00:23:55.178 "min_cntlid": 1, 00:23:55.178 "max_cntlid": 65519, 00:23:55.178 "namespaces": [ 00:23:55.178 { 00:23:55.178 "nsid": 1, 00:23:55.178 "bdev_name": "Malloc0", 00:23:55.178 "name": "Malloc0", 00:23:55.178 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:55.178 "eui64": "ABCDEF0123456789", 00:23:55.178 "uuid": "ce5570f3-a8f0-4c8e-afdc-10042a7c1089" 00:23:55.178 } 00:23:55.178 ] 00:23:55.178 } 00:23:55.178 ] 00:23:55.178 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.178 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:55.178 [2024-10-16 07:06:54.434014] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:23:55.178 [2024-10-16 07:06:54.434061] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3217736 ] 00:23:55.178 [2024-10-16 07:06:54.468813] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:23:55.178 [2024-10-16 07:06:54.472881] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:55.178 [2024-10-16 07:06:54.472891] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:55.178 [2024-10-16 07:06:54.472906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:55.178 [2024-10-16 07:06:54.472915] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:55.178 [2024-10-16 07:06:54.473823] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:23:55.178 [2024-10-16 07:06:54.473881] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x75d760 0 00:23:55.178 [2024-10-16 07:06:54.487869] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:55.178 [2024-10-16 07:06:54.487887] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:55.178 [2024-10-16 07:06:54.487893] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:55.178 [2024-10-16 07:06:54.487896] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:55.178 [2024-10-16 07:06:54.487937] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.178 [2024-10-16 07:06:54.487945] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.178 [2024-10-16 07:06:54.487950] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x75d760) 00:23:55.178 [2024-10-16 07:06:54.487966] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:55.178 [2024-10-16 07:06:54.487991] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7bd480, cid 0, qid 0 00:23:55.178 [2024-10-16 07:06:54.495859] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.178 [2024-10-16 07:06:54.495870] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.178 [2024-10-16 07:06:54.495875] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.178 [2024-10-16 07:06:54.495880] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7bd480) on tqpair=0x75d760 00:23:55.178 [2024-10-16 07:06:54.495895] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:55.178 [2024-10-16 07:06:54.495905] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:23:55.178 [2024-10-16 07:06:54.495912] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:23:55.178 [2024-10-16 07:06:54.495929] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.178 [2024-10-16 07:06:54.495934] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.178 [2024-10-16 07:06:54.495938] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x75d760) 00:23:55.178 [2024-10-16 07:06:54.495953] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.178 [2024-10-16 07:06:54.495969] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7bd480, cid 0, qid 0 00:23:55.178 [2024-10-16 07:06:54.496167] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.178 [2024-10-16 07:06:54.496176] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.178 [2024-10-16 07:06:54.496180] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.178 [2024-10-16 07:06:54.496184] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7bd480) on tqpair=0x75d760 00:23:55.178 [2024-10-16 07:06:54.496190] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:23:55.178 [2024-10-16 07:06:54.496198] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:23:55.178 [2024-10-16 07:06:54.496206] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.179 [2024-10-16 07:06:54.496211] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.179 [2024-10-16 07:06:54.496215] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x75d760) 00:23:55.179 [2024-10-16 07:06:54.496221] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.179 [2024-10-16 07:06:54.496233] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7bd480, cid 0, qid 0 00:23:55.179 [2024-10-16 07:06:54.496408] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.179 [2024-10-16 07:06:54.496415] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.179 [2024-10-16 07:06:54.496420] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.179 [2024-10-16 07:06:54.496424] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7bd480) on tqpair=0x75d760 00:23:55.179 [2024-10-16 07:06:54.496429] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:23:55.179 [2024-10-16 07:06:54.496438] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:23:55.179 [2024-10-16 07:06:54.496446] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.179 [2024-10-16 07:06:54.496450] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.179 [2024-10-16 07:06:54.496454] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x75d760) 00:23:55.179 [2024-10-16 07:06:54.496461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.179 [2024-10-16 07:06:54.496472] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7bd480, cid 0, qid 0 00:23:55.179 
[2024-10-16 07:06:54.496666] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.179 [2024-10-16 07:06:54.496673] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.179 [2024-10-16 07:06:54.496677] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.179 [2024-10-16 07:06:54.496681] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7bd480) on tqpair=0x75d760 00:23:55.179 [2024-10-16 07:06:54.496687] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:55.179 [2024-10-16 07:06:54.496696] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.179 [2024-10-16 07:06:54.496701] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.179 [2024-10-16 07:06:54.496705] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x75d760) 00:23:55.179 [2024-10-16 07:06:54.496712] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.179 [2024-10-16 07:06:54.496723] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7bd480, cid 0, qid 0 00:23:55.179 [2024-10-16 07:06:54.496911] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.179 [2024-10-16 07:06:54.496919] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.179 [2024-10-16 07:06:54.496923] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.179 [2024-10-16 07:06:54.496927] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7bd480) on tqpair=0x75d760 00:23:55.179 [2024-10-16 07:06:54.496932] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:23:55.179 [2024-10-16 07:06:54.496938] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:23:55.179 [2024-10-16 07:06:54.496947] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:55.179 [2024-10-16 07:06:54.497053] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:23:55.179 [2024-10-16 07:06:54.497059] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:55.179 [2024-10-16 07:06:54.497070] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.179 [2024-10-16 07:06:54.497074] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.179 [2024-10-16 07:06:54.497078] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x75d760) 00:23:55.179 [2024-10-16 07:06:54.497085] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.179 [2024-10-16 07:06:54.497096] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7bd480, cid 0, qid 0 00:23:55.179 [2024-10-16 07:06:54.497266] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.179 [2024-10-16 07:06:54.497273] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:23:55.179 [2024-10-16 07:06:54.497278] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.179 [2024-10-16 07:06:54.497281] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7bd480) on tqpair=0x75d760 00:23:55.179 [2024-10-16 07:06:54.497287] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:55.179 [2024-10-16 07:06:54.497297] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.179 [2024-10-16 07:06:54.497302] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.179 [2024-10-16 07:06:54.497305] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x75d760) 00:23:55.179 [2024-10-16 07:06:54.497312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.179 [2024-10-16 07:06:54.497324] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7bd480, cid 0, qid 0 00:23:55.179 [2024-10-16 07:06:54.497502] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.179 [2024-10-16 07:06:54.497509] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.179 [2024-10-16 07:06:54.497513] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.179 [2024-10-16 07:06:54.497517] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7bd480) on tqpair=0x75d760 00:23:55.179 [2024-10-16 07:06:54.497523] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:55.179 [2024-10-16 07:06:54.497528] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:23:55.179 [2024-10-16 07:06:54.497536] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:23:55.179 [2024-10-16 07:06:54.497545] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:23:55.179 [2024-10-16 07:06:54.497559] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.179 [2024-10-16 07:06:54.497564] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x75d760) 00:23:55.179 [2024-10-16 07:06:54.497571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.179 [2024-10-16 07:06:54.497583] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7bd480, cid 0, qid 0 00:23:55.179 [2024-10-16 07:06:54.497803] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:55.179 [2024-10-16 07:06:54.497810] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:55.179 [2024-10-16 07:06:54.497814] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:55.179 [2024-10-16 07:06:54.497819] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x75d760): datao=0, datal=4096, cccid=0 00:23:55.179 [2024-10-16 07:06:54.497825] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7bd480) on tqpair(0x75d760): expected_datao=0, payload_size=4096 
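The DEBUG lines above are the standard fabric controller bring-up, step by step: FABRIC CONNECT on the admin queue (CNTLID 0x0001), PROPERTY GET of VS and CAP, a disable cycle (CC.EN = 0, wait for CSTS.RDY = 0) followed by enable (CC.EN = 1, wait for CSTS.RDY = 1), and finally IDENTIFY controller (cdw10:00000001, i.e. CNS 01h), whose 4096-byte payload is arriving as the C2H data PDU handled below. A hypothetical kernel-side equivalent of the same exchange with nvme-cli, using this run's listener address:

# nvme-cli stand-in for the exchange spdk_nvme_identify is tracing here
# (hypothetical; run from the initiator side of this setup):
nvme discover -t tcp -a 10.0.0.2 -s 4420     # query the discovery subsystem
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
nvme id-ctrl /dev/nvme0                      # IDENTIFY controller, CNS 01h
nvme disconnect -n nqn.2016-06.io.spdk:cnode1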
00:23:55.179 [2024-10-16 07:06:54.497830] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.179 [2024-10-16 07:06:54.497852] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:55.179 [2024-10-16 07:06:54.497859] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:55.179 [2024-10-16 07:06:54.539008] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.179 [2024-10-16 07:06:54.539018] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.179 [2024-10-16 07:06:54.539022] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.179 [2024-10-16 07:06:54.539026] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7bd480) on tqpair=0x75d760 00:23:55.179 [2024-10-16 07:06:54.539036] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:23:55.179 [2024-10-16 07:06:54.539041] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:23:55.179 [2024-10-16 07:06:54.539046] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:23:55.179 [2024-10-16 07:06:54.539053] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:23:55.179 [2024-10-16 07:06:54.539058] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:23:55.179 [2024-10-16 07:06:54.539063] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:23:55.179 [2024-10-16 07:06:54.539072] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:23:55.179 [2024-10-16 07:06:54.539080] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.179 [2024-10-16 07:06:54.539084] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.179 [2024-10-16 07:06:54.539088] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x75d760) 00:23:55.179 [2024-10-16 07:06:54.539095] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:55.179 [2024-10-16 07:06:54.539108] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7bd480, cid 0, qid 0 00:23:55.179 [2024-10-16 07:06:54.539278] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.179 [2024-10-16 07:06:54.539284] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.179 [2024-10-16 07:06:54.539287] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.179 [2024-10-16 07:06:54.539291] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7bd480) on tqpair=0x75d760 00:23:55.179 [2024-10-16 07:06:54.539301] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.179 [2024-10-16 07:06:54.539308] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.179 [2024-10-16 07:06:54.539312] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x75d760) 00:23:55.179 [2024-10-16 07:06:54.539318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:55.179 [2024-10-16 07:06:54.539325] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.179 [2024-10-16 07:06:54.539329] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.179 [2024-10-16 07:06:54.539332] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x75d760) 00:23:55.179 [2024-10-16 07:06:54.539338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:55.179 [2024-10-16 07:06:54.539344] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.179 [2024-10-16 07:06:54.539348] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.179 [2024-10-16 07:06:54.539351] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x75d760) 00:23:55.179 [2024-10-16 07:06:54.539357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:55.179 [2024-10-16 07:06:54.539363] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.179 [2024-10-16 07:06:54.539367] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.180 [2024-10-16 07:06:54.539370] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x75d760) 00:23:55.180 [2024-10-16 07:06:54.539376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:55.180 [2024-10-16 07:06:54.539381] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:23:55.180 [2024-10-16 07:06:54.539393] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:55.180 [2024-10-16 07:06:54.539400] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.180 [2024-10-16 07:06:54.539403] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x75d760) 00:23:55.180 [2024-10-16 07:06:54.539410] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.180 [2024-10-16 07:06:54.539423] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7bd480, cid 0, qid 0 00:23:55.180 [2024-10-16 07:06:54.539428] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7bd600, cid 1, qid 0 00:23:55.180 [2024-10-16 07:06:54.539433] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7bd780, cid 2, qid 0 00:23:55.180 [2024-10-16 07:06:54.539438] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7bd900, cid 3, qid 0 00:23:55.180 [2024-10-16 07:06:54.539443] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7bda80, cid 4, qid 0 00:23:55.180 [2024-10-16 07:06:54.539688] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.180 [2024-10-16 07:06:54.539694] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.180 [2024-10-16 07:06:54.539698] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.180 [2024-10-16 07:06:54.539702] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7bda80) on 
tqpair=0x75d760 00:23:55.180 [2024-10-16 07:06:54.539708] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:23:55.180 [2024-10-16 07:06:54.539713] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:23:55.180 [2024-10-16 07:06:54.539725] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.180 [2024-10-16 07:06:54.539729] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x75d760) 00:23:55.180 [2024-10-16 07:06:54.539738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.180 [2024-10-16 07:06:54.539749] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7bda80, cid 4, qid 0 00:23:55.180 [2024-10-16 07:06:54.543872] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:55.180 [2024-10-16 07:06:54.543883] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:55.180 [2024-10-16 07:06:54.543886] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:55.180 [2024-10-16 07:06:54.543890] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x75d760): datao=0, datal=4096, cccid=4 00:23:55.180 [2024-10-16 07:06:54.543895] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7bda80) on tqpair(0x75d760): expected_datao=0, payload_size=4096 00:23:55.180 [2024-10-16 07:06:54.543899] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.180 [2024-10-16 07:06:54.543906] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:55.180 [2024-10-16 07:06:54.543910] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:55.180 [2024-10-16 07:06:54.543916] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.180 [2024-10-16 07:06:54.543922] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.180 [2024-10-16 07:06:54.543925] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.180 [2024-10-16 07:06:54.543929] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7bda80) on tqpair=0x75d760 00:23:55.180 [2024-10-16 07:06:54.543944] nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:23:55.180 [2024-10-16 07:06:54.543980] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.180 [2024-10-16 07:06:54.543984] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x75d760) 00:23:55.180 [2024-10-16 07:06:54.543991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.180 [2024-10-16 07:06:54.543999] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.180 [2024-10-16 07:06:54.544002] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.180 [2024-10-16 07:06:54.544006] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x75d760) 00:23:55.180 [2024-10-16 07:06:54.544012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:55.180 [2024-10-16 07:06:54.544027] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7bda80, cid 4, qid 0 00:23:55.180 [2024-10-16 07:06:54.544032] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7bdc00, cid 5, qid 0 00:23:55.180 [2024-10-16 07:06:54.544260] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:55.180 [2024-10-16 07:06:54.544267] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:55.180 [2024-10-16 07:06:54.544270] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:55.180 [2024-10-16 07:06:54.544274] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x75d760): datao=0, datal=1024, cccid=4 00:23:55.180 [2024-10-16 07:06:54.544278] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7bda80) on tqpair(0x75d760): expected_datao=0, payload_size=1024 00:23:55.180 [2024-10-16 07:06:54.544283] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.180 [2024-10-16 07:06:54.544289] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:55.180 [2024-10-16 07:06:54.544293] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:55.180 [2024-10-16 07:06:54.544299] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.180 [2024-10-16 07:06:54.544305] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.180 [2024-10-16 07:06:54.544308] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.180 [2024-10-16 07:06:54.544312] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7bdc00) on tqpair=0x75d760 00:23:55.180 [2024-10-16 07:06:54.586045] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.180 [2024-10-16 07:06:54.586058] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.180 [2024-10-16 07:06:54.586061] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.180 [2024-10-16 07:06:54.586065] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7bda80) on tqpair=0x75d760 00:23:55.180 [2024-10-16 07:06:54.586082] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.180 [2024-10-16 07:06:54.586086] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x75d760) 00:23:55.180 [2024-10-16 07:06:54.586093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.180 [2024-10-16 07:06:54.586110] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7bda80, cid 4, qid 0 00:23:55.180 [2024-10-16 07:06:54.586343] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:55.180 [2024-10-16 07:06:54.586350] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:55.180 [2024-10-16 07:06:54.586354] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:55.180 [2024-10-16 07:06:54.586357] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x75d760): datao=0, datal=3072, cccid=4 00:23:55.180 [2024-10-16 07:06:54.586362] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7bda80) on tqpair(0x75d760): expected_datao=0, payload_size=3072 00:23:55.180 [2024-10-16 07:06:54.586366] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.180 [2024-10-16 07:06:54.586373] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:55.180 
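A few records back the host configured the AER slots (SET FEATURES ASYNC EVENT CONFIGURATION followed by four ASYNC EVENT REQUESTs), queried the keep-alive timer, and logged "Sending keep alive every 5000000 us"; that 5 s cadence is consistent with SPDK's default 10 s keep-alive timeout, with keep-alives sent at a fraction of the negotiated value. A minimal host-side sketch, assuming SPDK's public API (spdk/nvme.h); the g_running flag and function names are illustrative and error handling is omitted.

/* Request a keep-alive timeout (KATO) at connect time, then service the
 * timer from the admin poll loop; the KEEP ALIVE (18) commands in the log
 * are emitted from exactly this kind of loop. Hedged sketch, not the
 * identify tool's code. */
#include <stdbool.h>
#include "spdk/nvme.h"

static volatile bool g_running = true;	/* illustrative stop flag */

static struct spdk_nvme_ctrlr *
connect_with_kato(const struct spdk_nvme_transport_id *trid)
{
	struct spdk_nvme_ctrlr_opts opts;

	spdk_nvme_ctrlr_get_default_ctrlr_opts(&opts, sizeof(opts));
	opts.keep_alive_timeout_ms = 10000;	/* KATO requested from the target */

	return spdk_nvme_connect(trid, &opts, sizeof(opts));
}

static void
admin_poll_loop(struct spdk_nvme_ctrlr *ctrlr)
{
	while (g_running) {
		/* Completes admin commands and sends keep-alives when due. */
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}
}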
[2024-10-16 07:06:54.586377] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:55.180 [2024-10-16 07:06:54.586528] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.180 [2024-10-16 07:06:54.586535] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.180 [2024-10-16 07:06:54.586538] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.180 [2024-10-16 07:06:54.586542] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7bda80) on tqpair=0x75d760 00:23:55.180 [2024-10-16 07:06:54.586551] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.180 [2024-10-16 07:06:54.586554] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x75d760) 00:23:55.180 [2024-10-16 07:06:54.586561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.180 [2024-10-16 07:06:54.586575] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7bda80, cid 4, qid 0 00:23:55.180 [2024-10-16 07:06:54.586794] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:55.180 [2024-10-16 07:06:54.586801] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:55.180 [2024-10-16 07:06:54.586804] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:55.180 [2024-10-16 07:06:54.586808] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x75d760): datao=0, datal=8, cccid=4 00:23:55.180 [2024-10-16 07:06:54.586812] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7bda80) on tqpair(0x75d760): expected_datao=0, payload_size=8 00:23:55.180 [2024-10-16 07:06:54.586816] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.180 [2024-10-16 07:06:54.586823] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:55.180 [2024-10-16 07:06:54.586826] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:55.180 [2024-10-16 07:06:54.631856] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.180 [2024-10-16 07:06:54.631867] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.180 [2024-10-16 07:06:54.631870] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.180 [2024-10-16 07:06:54.631874] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7bda80) on tqpair=0x75d760
00:23:55.180 =====================================================
00:23:55.180 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:23:55.180 =====================================================
00:23:55.180 Controller Capabilities/Features
00:23:55.180 ================================
00:23:55.180 Vendor ID: 0000
00:23:55.180 Subsystem Vendor ID: 0000
00:23:55.180 Serial Number: ....................
00:23:55.180 Model Number: ........................................
00:23:55.180 Firmware Version: 25.01
00:23:55.180 Recommended Arb Burst: 0
00:23:55.180 IEEE OUI Identifier: 00 00 00
00:23:55.180 Multi-path I/O
00:23:55.180 May have multiple subsystem ports: No
00:23:55.180 May have multiple controllers: No
00:23:55.180 Associated with SR-IOV VF: No
00:23:55.180 Max Data Transfer Size: 131072
00:23:55.180 Max Number of Namespaces: 0
00:23:55.180 Max Number of I/O Queues: 1024
00:23:55.180 NVMe Specification Version (VS): 1.3
00:23:55.180 NVMe Specification Version (Identify): 1.3
00:23:55.180 Maximum Queue Entries: 128
00:23:55.180 Contiguous Queues Required: Yes
00:23:55.180 Arbitration Mechanisms Supported
00:23:55.180 Weighted Round Robin: Not Supported
00:23:55.180 Vendor Specific: Not Supported
00:23:55.180 Reset Timeout: 15000 ms
00:23:55.180 Doorbell Stride: 4 bytes
00:23:55.181 NVM Subsystem Reset: Not Supported
00:23:55.181 Command Sets Supported
00:23:55.181 NVM Command Set: Supported
00:23:55.181 Boot Partition: Not Supported
00:23:55.181 Memory Page Size Minimum: 4096 bytes
00:23:55.181 Memory Page Size Maximum: 4096 bytes
00:23:55.181 Persistent Memory Region: Not Supported
00:23:55.181 Optional Asynchronous Events Supported
00:23:55.181 Namespace Attribute Notices: Not Supported
00:23:55.181 Firmware Activation Notices: Not Supported
00:23:55.181 ANA Change Notices: Not Supported
00:23:55.181 PLE Aggregate Log Change Notices: Not Supported
00:23:55.181 LBA Status Info Alert Notices: Not Supported
00:23:55.181 EGE Aggregate Log Change Notices: Not Supported
00:23:55.181 Normal NVM Subsystem Shutdown event: Not Supported
00:23:55.181 Zone Descriptor Change Notices: Not Supported
00:23:55.181 Discovery Log Change Notices: Supported
00:23:55.181 Controller Attributes
00:23:55.181 128-bit Host Identifier: Not Supported
00:23:55.181 Non-Operational Permissive Mode: Not Supported
00:23:55.181 NVM Sets: Not Supported
00:23:55.181 Read Recovery Levels: Not Supported
00:23:55.181 Endurance Groups: Not Supported
00:23:55.181 Predictable Latency Mode: Not Supported
00:23:55.181 Traffic Based Keep Alive: Not Supported
00:23:55.181 Namespace Granularity: Not Supported
00:23:55.181 SQ Associations: Not Supported
00:23:55.181 UUID List: Not Supported
00:23:55.181 Multi-Domain Subsystem: Not Supported
00:23:55.181 Fixed Capacity Management: Not Supported
00:23:55.181 Variable Capacity Management: Not Supported
00:23:55.181 Delete Endurance Group: Not Supported
00:23:55.181 Delete NVM Set: Not Supported
00:23:55.181 Extended LBA Formats Supported: Not Supported
00:23:55.181 Flexible Data Placement Supported: Not Supported
00:23:55.181
00:23:55.181 Controller Memory Buffer Support
00:23:55.181 ================================
00:23:55.181 Supported: No
00:23:55.181
00:23:55.181 Persistent Memory Region Support
00:23:55.181 ================================
00:23:55.181 Supported: No
00:23:55.181
00:23:55.181 Admin Command Set Attributes
00:23:55.181 ============================
00:23:55.181 Security Send/Receive: Not Supported
00:23:55.181 Format NVM: Not Supported
00:23:55.181 Firmware Activate/Download: Not Supported
00:23:55.181 Namespace Management: Not Supported
00:23:55.181 Device Self-Test: Not Supported
00:23:55.181 Directives: Not Supported
00:23:55.181 NVMe-MI: Not Supported
00:23:55.181 Virtualization Management: Not Supported
00:23:55.181 Doorbell Buffer Config: Not Supported
00:23:55.181 Get LBA Status Capability: Not Supported
00:23:55.181 Command & Feature Lockdown Capability: Not Supported
00:23:55.181 Abort Command Limit: 1
00:23:55.181 Async Event Request Limit: 4
00:23:55.181 Number of Firmware Slots: N/A
00:23:55.181 Firmware Slot 1 Read-Only: N/A
00:23:55.181 Firmware Activation Without Reset: N/A
00:23:55.181 Multiple Update Detection Support: N/A
00:23:55.181 Firmware Update Granularity: No Information Provided
00:23:55.181 Per-Namespace SMART Log: No
00:23:55.181 Asymmetric Namespace Access Log Page: Not Supported
00:23:55.181 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:23:55.181 Command Effects Log Page: Not Supported
00:23:55.181 Get Log Page Extended Data: Supported
00:23:55.181 Telemetry Log Pages: Not Supported
00:23:55.181 Persistent Event Log Pages: Not Supported
00:23:55.181 Supported Log Pages Log Page: May Support
00:23:55.181 Commands Supported & Effects Log Page: Not Supported
00:23:55.181 Feature Identifiers & Effects Log Page: May Support
00:23:55.181 NVMe-MI Commands & Effects Log Page: May Support
00:23:55.181 Data Area 4 for Telemetry Log: Not Supported
00:23:55.181 Error Log Page Entries Supported: 128
00:23:55.181 Keep Alive: Not Supported
00:23:55.181
00:23:55.181 NVM Command Set Attributes
00:23:55.181 ==========================
00:23:55.181 Submission Queue Entry Size
00:23:55.181 Max: 1
00:23:55.181 Min: 1
00:23:55.181 Completion Queue Entry Size
00:23:55.181 Max: 1
00:23:55.181 Min: 1
00:23:55.181 Number of Namespaces: 0
00:23:55.181 Compare Command: Not Supported
00:23:55.181 Write Uncorrectable Command: Not Supported
00:23:55.181 Dataset Management Command: Not Supported
00:23:55.181 Write Zeroes Command: Not Supported
00:23:55.181 Set Features Save Field: Not Supported
00:23:55.181 Reservations: Not Supported
00:23:55.181 Timestamp: Not Supported
00:23:55.181 Copy: Not Supported
00:23:55.181 Volatile Write Cache: Not Present
00:23:55.181 Atomic Write Unit (Normal): 1
00:23:55.181 Atomic Write Unit (PFail): 1
00:23:55.181 Atomic Compare & Write Unit: 1
00:23:55.181 Fused Compare & Write: Supported
00:23:55.181 Scatter-Gather List
00:23:55.181 SGL Command Set: Supported
00:23:55.181 SGL Keyed: Supported
00:23:55.181 SGL Bit Bucket Descriptor: Not Supported
00:23:55.181 SGL Metadata Pointer: Not Supported
00:23:55.181 Oversized SGL: Not Supported
00:23:55.181 SGL Metadata Address: Not Supported
00:23:55.181 SGL Offset: Supported
00:23:55.181 Transport SGL Data Block: Not Supported
00:23:55.181 Replay Protected Memory Block: Not Supported
00:23:55.181
00:23:55.181 Firmware Slot Information
00:23:55.181 =========================
00:23:55.181 Active slot: 0
00:23:55.181
00:23:55.181
00:23:55.181 Error Log
00:23:55.181 =========
00:23:55.181
00:23:55.181 Active Namespaces
00:23:55.181 =================
00:23:55.181 Discovery Log Page
00:23:55.181 ==================
00:23:55.181 Generation Counter: 2
00:23:55.181 Number of Records: 2
00:23:55.181 Record Format: 0
00:23:55.181
00:23:55.181 Discovery Log Entry 0
00:23:55.181 ----------------------
00:23:55.181 Transport Type: 3 (TCP)
00:23:55.181 Address Family: 1 (IPv4)
00:23:55.181 Subsystem Type: 3 (Current Discovery Subsystem)
00:23:55.181 Entry Flags:
00:23:55.181 Duplicate Returned Information: 1
00:23:55.181 Explicit Persistent Connection Support for Discovery: 1
00:23:55.181 Transport Requirements:
00:23:55.181 Secure Channel: Not Required
00:23:55.181 Port ID: 0 (0x0000)
00:23:55.181 Controller ID: 65535 (0xffff)
00:23:55.181 Admin Max SQ Size: 128
00:23:55.181 Transport Service Identifier: 4420
00:23:55.181 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:23:55.181 Transport Address: 10.0.0.2
00:23:55.181 Discovery Log Entry 1
00:23:55.181 ----------------------
00:23:55.181 Transport Type: 3 (TCP)
00:23:55.181 Address Family: 1 (IPv4)
00:23:55.181 Subsystem Type: 2 (NVM Subsystem)
00:23:55.181 Entry Flags:
00:23:55.181 Duplicate Returned Information: 0
00:23:55.181 Explicit Persistent Connection Support for Discovery: 0
00:23:55.181 Transport Requirements:
00:23:55.181 Secure Channel: Not Required
00:23:55.181 Port ID: 0 (0x0000)
00:23:55.181 Controller ID: 65535 (0xffff)
00:23:55.181 Admin Max SQ Size: 128
00:23:55.181 Transport Service Identifier: 4420
00:23:55.181 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:23:55.181 Transport Address: 10.0.0.2
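The discovery log page above (Generation Counter 2, two records) was retrieved with the GET LOG PAGE (02) capsules visible in the surrounding DEBUG lines: a 4096-byte read of log page 0x70 (cdw10:00ff0070) followed by smaller reads for the remaining data. A minimal sketch of the same fetch through SPDK's public API (spdk/nvme.h and spdk/nvmf_spec.h); the fixed buffer size, the busy-wait, and the skipped completion-status check are simplifications.

/* Fetch and print the discovery log page from an admin-connected discovery
 * controller (e.g. spdk_nvme_connect() against
 * nqn.2014-08.org.nvmexpress.discovery). Hedged sketch, not the identify
 * tool's implementation. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include "spdk/nvme.h"
#include "spdk/nvmf_spec.h"

static bool g_log_done;

static void
get_log_cb(void *ctx, const struct spdk_nvme_cpl *cpl)
{
	g_log_done = true;	/* completion-status check omitted for brevity */
}

static void
dump_discovery_log(struct spdk_nvme_ctrlr *ctrlr)
{
	/* 1024-byte header plus room for three 1024-byte entries. */
	struct spdk_nvmf_discovery_log_page *page = calloc(1, 4096);

	if (page == NULL) {
		return;
	}

	g_log_done = false;
	if (spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY,
					     SPDK_NVME_GLOBAL_NS_TAG,
					     page, 4096, 0, get_log_cb, NULL) != 0) {
		free(page);
		return;
	}
	while (!g_log_done) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}

	printf("Generation Counter: %ju\n", (uintmax_t)page->genctr);
	printf("Number of Records: %ju\n", (uintmax_t)page->numrec);
	for (uint64_t i = 0; i < page->numrec && i < 3; i++) {
		const struct spdk_nvmf_discovery_log_page_entry *e = &page->entries[i];

		/* Spec strings are space-padded, not NUL-terminated. */
		printf("Entry %ju: trtype %u, subnqn %.256s, traddr %.256s, trsvcid %.32s\n",
		       (uintmax_t)i, e->trtype, e->subnqn, e->traddr, e->trsvcid);
	}
	free(page);
}

A production reader should re-issue the read if the generation counter changes while paging through a large log; with two records everything fits in one read here, so the sketch skips that.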
[2024-10-16 07:06:54.631978] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:23:55.181 [2024-10-16 07:06:54.631992] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7bd480) on tqpair=0x75d760 00:23:55.181 [2024-10-16 07:06:54.632000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.181 [2024-10-16 07:06:54.632006] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7bd600) on tqpair=0x75d760 00:23:55.181 [2024-10-16 07:06:54.632010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.181 [2024-10-16 07:06:54.632015] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7bd780) on tqpair=0x75d760 00:23:55.181 [2024-10-16 07:06:54.632020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.181 [2024-10-16 07:06:54.632025] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7bd900) on tqpair=0x75d760 00:23:55.181 [2024-10-16 07:06:54.632030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.181 [2024-10-16 07:06:54.632040] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.181 [2024-10-16 07:06:54.632044] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.181 [2024-10-16 07:06:54.632047] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x75d760) 00:23:55.181 [2024-10-16 07:06:54.632055] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.181 [2024-10-16 07:06:54.632070] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7bd900, cid 3, qid 0 00:23:55.181 [2024-10-16 07:06:54.632258] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.181 [2024-10-16 07:06:54.632265] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.181 [2024-10-16 07:06:54.632268] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.181 [2024-10-16 07:06:54.632272] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7bd900) on tqpair=0x75d760 00:23:55.181 [2024-10-16 07:06:54.632280] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.181 [2024-10-16 07:06:54.632283] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.181 [2024-10-16 07:06:54.632287] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x75d760) 00:23:55.181 [2024-10-16 07:06:54.632294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.182 [2024-10-16 07:06:54.632307] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7bd900, cid 3, qid 0 00:23:55.182 [2024-10-16 07:06:54.632496] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.182 [2024-10-16 07:06:54.632502] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.182 [2024-10-16 07:06:54.632506] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.182 [2024-10-16 07:06:54.632510] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7bd900) on tqpair=0x75d760 00:23:55.182 [2024-10-16 07:06:54.632515] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:23:55.182 [2024-10-16 07:06:54.632522] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:23:55.182 [2024-10-16 07:06:54.632532] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.182 [2024-10-16 07:06:54.632536] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.182 [2024-10-16 07:06:54.632539] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x75d760) 00:23:55.182 [2024-10-16 07:06:54.632546] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.182 [2024-10-16 07:06:54.632557] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7bd900, cid 3, qid 0 00:23:55.182 [2024-10-16 07:06:54.632735] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.182 [2024-10-16 07:06:54.632747] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.182 [2024-10-16 07:06:54.632750] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.182 [2024-10-16 07:06:54.632754] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7bd900) on tqpair=0x75d760 00:23:55.182 [2024-10-16 07:06:54.632765] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.182 [2024-10-16 07:06:54.632769] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.182 [2024-10-16 07:06:54.632773] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x75d760) 00:23:55.182 [2024-10-16 07:06:54.632779] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.182 [2024-10-16 07:06:54.632790] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7bd900, cid 3, qid 0 00:23:55.182 [2024-10-16 07:06:54.632866] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.182 [2024-10-16 07:06:54.632873] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.182 [2024-10-16 07:06:54.632877] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.182 [2024-10-16 07:06:54.632881] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7bd900) on tqpair=0x75d760 00:23:55.182 [2024-10-16 07:06:54.632891] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.182 [2024-10-16 07:06:54.632895] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.182 [2024-10-16 07:06:54.632898] nvme_tcp.c:
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x75d760) 00:23:55.182 [2024-10-16 07:06:54.632905] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.182 [2024-10-16 07:06:54.632916] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7bd900, cid 3, qid 0 00:23:55.182 [2024-10-16 07:06:54.633001] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.182 [2024-10-16 07:06:54.633007] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.182 [2024-10-16 07:06:54.633010] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.182 [2024-10-16 07:06:54.633014] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7bd900) on tqpair=0x75d760 00:23:55.182 [2024-10-16 07:06:54.633024] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.182 [2024-10-16 07:06:54.633028] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.182 [2024-10-16 07:06:54.633032] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x75d760) 00:23:55.182 [2024-10-16 07:06:54.633038] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.182 [2024-10-16 07:06:54.633049] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7bd900, cid 3, qid 0 00:23:55.182 [2024-10-16 07:06:54.633240] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.182 [2024-10-16 07:06:54.633246] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.182 [2024-10-16 07:06:54.633249] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.182 [2024-10-16 07:06:54.633253] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7bd900) on tqpair=0x75d760 00:23:55.182 [2024-10-16 07:06:54.633263] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.182 [2024-10-16 07:06:54.633267] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.182 [2024-10-16 07:06:54.633270] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x75d760) 00:23:55.182 [2024-10-16 07:06:54.633277] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.182 [2024-10-16 07:06:54.633287] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7bd900, cid 3, qid 0 00:23:55.182 [2024-10-16 07:06:54.633482] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.182 [2024-10-16 07:06:54.633488] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.182 [2024-10-16 07:06:54.633494] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.182 [2024-10-16 07:06:54.633498] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7bd900) on tqpair=0x75d760 00:23:55.182 [2024-10-16 07:06:54.633508] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.182 [2024-10-16 07:06:54.633512] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.182 [2024-10-16 07:06:54.633515] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x75d760) 00:23:55.182 [2024-10-16 07:06:54.633522] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.182 [2024-10-16 07:06:54.633532] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7bd900, cid 3, qid 0 00:23:55.182 [2024-10-16 07:06:54.633726] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.182 [2024-10-16 07:06:54.633733] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.182 [2024-10-16 07:06:54.633736] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.182 [2024-10-16 07:06:54.633740] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7bd900) on tqpair=0x75d760 00:23:55.182 [2024-10-16 07:06:54.633750] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.182 [2024-10-16 07:06:54.633754] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.182 [2024-10-16 07:06:54.633757] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x75d760) 00:23:55.182 [2024-10-16 07:06:54.633764] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.182 [2024-10-16 07:06:54.633774] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7bd900, cid 3, qid 0 00:23:55.182 [2024-10-16 07:06:54.633964] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.182 [2024-10-16 07:06:54.633971] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.182 [2024-10-16 07:06:54.633974] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.182 [2024-10-16 07:06:54.633978] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7bd900) on tqpair=0x75d760 00:23:55.182 [2024-10-16 07:06:54.633988] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.182 [2024-10-16 07:06:54.633992] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.182 [2024-10-16 07:06:54.633995] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x75d760) 00:23:55.182 [2024-10-16 07:06:54.634002] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.182 [2024-10-16 07:06:54.634013] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7bd900, cid 3, qid 0 00:23:55.182 [2024-10-16 07:06:54.634193] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.182 [2024-10-16 07:06:54.634200] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.182 [2024-10-16 07:06:54.634203] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.182 [2024-10-16 07:06:54.634207] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7bd900) on tqpair=0x75d760 00:23:55.182 [2024-10-16 07:06:54.634217] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.182 [2024-10-16 07:06:54.634220] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.182 [2024-10-16 07:06:54.634224] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x75d760) 00:23:55.182 [2024-10-16 07:06:54.634231] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.182 [2024-10-16 07:06:54.634241] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7bd900, cid 3, qid 0 00:23:55.182 [2024-10-16 07:06:54.634451] 
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.182 [2024-10-16 07:06:54.634457] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.182 [2024-10-16 07:06:54.634461] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.182 [2024-10-16 07:06:54.634467] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7bd900) on tqpair=0x75d760 00:23:55.182 [2024-10-16 07:06:54.634478] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.182 [2024-10-16 07:06:54.634481] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.182 [2024-10-16 07:06:54.634485] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x75d760) 00:23:55.182 [2024-10-16 07:06:54.634492] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.182 [2024-10-16 07:06:54.634503] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7bd900, cid 3, qid 0 00:23:55.183 [2024-10-16 07:06:54.634693] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.183 [2024-10-16 07:06:54.634699] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.183 [2024-10-16 07:06:54.634702] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.183 [2024-10-16 07:06:54.634706] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7bd900) on tqpair=0x75d760 00:23:55.183 [2024-10-16 07:06:54.634716] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.183 [2024-10-16 07:06:54.634720] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.183 [2024-10-16 07:06:54.634724] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x75d760) 00:23:55.183 [2024-10-16 07:06:54.634730] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.183 [2024-10-16 07:06:54.634741] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7bd900, cid 3, qid 0 00:23:55.183 [2024-10-16 07:06:54.634939] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.183 [2024-10-16 07:06:54.634946] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.183 [2024-10-16 07:06:54.634950] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.183 [2024-10-16 07:06:54.634953] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7bd900) on tqpair=0x75d760 00:23:55.183 [2024-10-16 07:06:54.634963] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.183 [2024-10-16 07:06:54.634968] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.183 [2024-10-16 07:06:54.634971] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x75d760) 00:23:55.183 [2024-10-16 07:06:54.634978] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.183 [2024-10-16 07:06:54.634988] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7bd900, cid 3, qid 0 00:23:55.183 [2024-10-16 07:06:54.635161] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.183 [2024-10-16 07:06:54.635168] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.183 [2024-10-16 07:06:54.635171] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.183 [2024-10-16 07:06:54.635175] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7bd900) on tqpair=0x75d760 00:23:55.183 [2024-10-16 07:06:54.635185] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.183 [2024-10-16 07:06:54.635189] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.183 [2024-10-16 07:06:54.635193] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x75d760) 00:23:55.183 [2024-10-16 07:06:54.635199] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.183 [2024-10-16 07:06:54.635210] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7bd900, cid 3, qid 0 00:23:55.183 [2024-10-16 07:06:54.635404] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.183 [2024-10-16 07:06:54.635410] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.183 [2024-10-16 07:06:54.635414] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.183 [2024-10-16 07:06:54.635418] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7bd900) on tqpair=0x75d760 00:23:55.183 [2024-10-16 07:06:54.635429] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.183 [2024-10-16 07:06:54.635433] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.183 [2024-10-16 07:06:54.635437] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x75d760) 00:23:55.183 [2024-10-16 07:06:54.635444] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.183 [2024-10-16 07:06:54.635454] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7bd900, cid 3, qid 0 00:23:55.183 [2024-10-16 07:06:54.635642] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.183 [2024-10-16 07:06:54.635648] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.183 [2024-10-16 07:06:54.635652] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.183 [2024-10-16 07:06:54.635655] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7bd900) on tqpair=0x75d760 00:23:55.183 [2024-10-16 07:06:54.635665] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.183 [2024-10-16 07:06:54.635669] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.183 [2024-10-16 07:06:54.635673] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x75d760) 00:23:55.183 [2024-10-16 07:06:54.635680] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.183 [2024-10-16 07:06:54.635690] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7bd900, cid 3, qid 0 00:23:55.183 [2024-10-16 07:06:54.639855] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.183 [2024-10-16 07:06:54.639863] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.183 [2024-10-16 07:06:54.639867] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.183 [2024-10-16 07:06:54.639871] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7bd900) on tqpair=0x75d760 00:23:55.183 
[2024-10-16 07:06:54.639881] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.183 [2024-10-16 07:06:54.639885] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.183 [2024-10-16 07:06:54.639889] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x75d760) 00:23:55.183 [2024-10-16 07:06:54.639896] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.183 [2024-10-16 07:06:54.639908] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7bd900, cid 3, qid 0 00:23:55.183 [2024-10-16 07:06:54.640084] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.183 [2024-10-16 07:06:54.640090] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.183 [2024-10-16 07:06:54.640094] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.183 [2024-10-16 07:06:54.640098] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7bd900) on tqpair=0x75d760 00:23:55.183 [2024-10-16 07:06:54.640105] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds
00:23:55.183
00:23:55.183 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
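The quoted -r argument is a transport ID in exactly the format spdk_nvme_transport_id_parse() accepts, and connecting with it is what kicks off the state machine the following DEBUG lines trace (connect adminq, FABRIC CONNECT, read vs, read cap, check en, enable, identify). A minimal sketch, assuming SPDK's public API (spdk/env.h, spdk/nvme.h) with error handling trimmed; the program name is illustrative.

/* Parse the transport ID from the log's command line and connect.
 * spdk_nvme_connect() runs the full init sequence before returning. */
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {0};
	struct spdk_nvme_ctrlr *ctrlr;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";	/* hypothetical name */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same contents as the -r string above. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	printf("connected: CNTLID 0x%04x, max xfer size %u\n",
	       spdk_nvme_ctrlr_get_data(ctrlr)->cntlid,
	       spdk_nvme_ctrlr_get_max_xfer_size(ctrlr));

	spdk_nvme_detach(ctrlr);	/* triggers the shutdown handshake */
	return 0;
}

Detaching is what produced the "Prepare to destruct SSD" message and the CC/CSTS shutdown polling after the first identify run above.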
00:23:55.468 [2024-10-16 07:06:54.685632] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:23:55.468 [2024-10-16 07:06:54.685682] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3217818 ] 00:23:55.468 [2024-10-16 07:06:54.723847] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:23:55.468 [2024-10-16 07:06:54.723909] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:55.468 [2024-10-16 07:06:54.723914] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:55.468 [2024-10-16 07:06:54.723929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:55.468 [2024-10-16 07:06:54.723939] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:55.468 [2024-10-16 07:06:54.724614] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:23:55.468 [2024-10-16 07:06:54.724652] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x13aa760 0 00:23:55.468 [2024-10-16 07:06:54.730864] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:55.468 [2024-10-16 07:06:54.730879] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:55.468 [2024-10-16 07:06:54.730883] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:55.468 [2024-10-16 07:06:54.730887] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:55.468 [2024-10-16 07:06:54.730922] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.468 [2024-10-16 07:06:54.730928] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.468 [2024-10-16 07:06:54.730932] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13aa760) 00:23:55.468 [2024-10-16 07:06:54.730946] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:55.468 [2024-10-16 07:06:54.730968] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a480, cid 0, qid 0 00:23:55.468 [2024-10-16 07:06:54.738857] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.468 [2024-10-16 07:06:54.738867] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.468 [2024-10-16 07:06:54.738870] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.468 [2024-10-16 07:06:54.738875] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x140a480) on tqpair=0x13aa760 00:23:55.468 [2024-10-16 07:06:54.738888] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:55.468 [2024-10-16 07:06:54.738895] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:23:55.468 [2024-10-16 07:06:54.738901] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:23:55.468 [2024-10-16 07:06:54.738915] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.468 [2024-10-16 07:06:54.738919] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.469 [2024-10-16 07:06:54.738923] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13aa760) 00:23:55.469 [2024-10-16 07:06:54.738932] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.469 [2024-10-16 07:06:54.738947] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a480, cid 0, qid 0 00:23:55.469 [2024-10-16 07:06:54.739161] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.469 [2024-10-16 07:06:54.739167] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.469 [2024-10-16 07:06:54.739171] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.469 [2024-10-16 07:06:54.739175] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x140a480) on tqpair=0x13aa760 00:23:55.469 [2024-10-16 07:06:54.739180] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:23:55.469 [2024-10-16 07:06:54.739188] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:23:55.469 [2024-10-16 07:06:54.739195] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.469 [2024-10-16 07:06:54.739199] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.469 [2024-10-16 07:06:54.739206] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13aa760) 00:23:55.469 [2024-10-16 07:06:54.739214] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.469 [2024-10-16 07:06:54.739225] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a480, cid 0, qid 0 00:23:55.469 [2024-10-16 07:06:54.739412]
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.469 [2024-10-16 07:06:54.739418] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.469 [2024-10-16 07:06:54.739422] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.469 [2024-10-16 07:06:54.739426] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x140a480) on tqpair=0x13aa760 00:23:55.469 [2024-10-16 07:06:54.739431] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:23:55.469 [2024-10-16 07:06:54.739439] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:23:55.469 [2024-10-16 07:06:54.739446] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.469 [2024-10-16 07:06:54.739450] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.469 [2024-10-16 07:06:54.739453] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13aa760) 00:23:55.469 [2024-10-16 07:06:54.739460] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.469 [2024-10-16 07:06:54.739471] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a480, cid 0, qid 0 00:23:55.469 [2024-10-16 07:06:54.739671] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.469 [2024-10-16 07:06:54.739677] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.469 [2024-10-16 07:06:54.739681] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.469 [2024-10-16 07:06:54.739685] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x140a480) on tqpair=0x13aa760 00:23:55.469 [2024-10-16 07:06:54.739690] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:55.469 [2024-10-16 07:06:54.739700] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.469 [2024-10-16 07:06:54.739704] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.469 [2024-10-16 07:06:54.739708] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13aa760) 00:23:55.469 [2024-10-16 07:06:54.739715] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.469 [2024-10-16 07:06:54.739725] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a480, cid 0, qid 0 00:23:55.469 [2024-10-16 07:06:54.739900] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.469 [2024-10-16 07:06:54.739907] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.469 [2024-10-16 07:06:54.739910] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.469 [2024-10-16 07:06:54.739914] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x140a480) on tqpair=0x13aa760 00:23:55.469 [2024-10-16 07:06:54.739919] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:23:55.469 [2024-10-16 07:06:54.739924] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:23:55.469 [2024-10-16 
07:06:54.739932] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:55.469 [2024-10-16 07:06:54.740037] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:23:55.469 [2024-10-16 07:06:54.740041] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:55.469 [2024-10-16 07:06:54.740052] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.469 [2024-10-16 07:06:54.740056] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.469 [2024-10-16 07:06:54.740060] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13aa760) 00:23:55.469 [2024-10-16 07:06:54.740067] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.469 [2024-10-16 07:06:54.740078] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a480, cid 0, qid 0 00:23:55.469 [2024-10-16 07:06:54.740272] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.469 [2024-10-16 07:06:54.740278] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.469 [2024-10-16 07:06:54.740282] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.469 [2024-10-16 07:06:54.740286] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x140a480) on tqpair=0x13aa760 00:23:55.469 [2024-10-16 07:06:54.740291] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:55.469 [2024-10-16 07:06:54.740300] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.469 [2024-10-16 07:06:54.740304] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.469 [2024-10-16 07:06:54.740308] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13aa760) 00:23:55.469 [2024-10-16 07:06:54.740315] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.469 [2024-10-16 07:06:54.740326] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a480, cid 0, qid 0 00:23:55.469 [2024-10-16 07:06:54.740486] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.469 [2024-10-16 07:06:54.740492] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.469 [2024-10-16 07:06:54.740496] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.469 [2024-10-16 07:06:54.740500] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x140a480) on tqpair=0x13aa760 00:23:55.469 [2024-10-16 07:06:54.740504] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:55.469 [2024-10-16 07:06:54.740509] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:23:55.469 [2024-10-16 07:06:54.740517] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:23:55.469 [2024-10-16 07:06:54.740524] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:23:55.469 [2024-10-16 07:06:54.740534] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.469 [2024-10-16 07:06:54.740538] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13aa760) 00:23:55.469 [2024-10-16 07:06:54.740545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.469 [2024-10-16 07:06:54.740556] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a480, cid 0, qid 0 00:23:55.469 [2024-10-16 07:06:54.740755] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:55.469 [2024-10-16 07:06:54.740761] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:55.469 [2024-10-16 07:06:54.740765] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:55.469 [2024-10-16 07:06:54.740769] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13aa760): datao=0, datal=4096, cccid=0 00:23:55.469 [2024-10-16 07:06:54.740774] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x140a480) on tqpair(0x13aa760): expected_datao=0, payload_size=4096 00:23:55.469 [2024-10-16 07:06:54.740779] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.469 [2024-10-16 07:06:54.740815] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:55.469 [2024-10-16 07:06:54.740820] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:55.469 [2024-10-16 07:06:54.740961] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.469 [2024-10-16 07:06:54.740968] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.469 [2024-10-16 07:06:54.740971] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.469 [2024-10-16 07:06:54.740975] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x140a480) on tqpair=0x13aa760 00:23:55.469 [2024-10-16 07:06:54.740983] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:23:55.469 [2024-10-16 07:06:54.740988] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:23:55.469 [2024-10-16 07:06:54.740993] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:23:55.469 [2024-10-16 07:06:54.740997] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:23:55.469 [2024-10-16 07:06:54.741002] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:23:55.469 [2024-10-16 07:06:54.741006] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:23:55.469 [2024-10-16 07:06:54.741015] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:23:55.469 [2024-10-16 07:06:54.741022] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.469 [2024-10-16 07:06:54.741026] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.469 [2024-10-16 07:06:54.741030] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13aa760) 
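Up to this point the trace records the standard NVMe-oF controller bring-up over TCP: a Fabrics Property Set writes CC.EN = 1, Property Get polls until CSTS.RDY = 1, and Identify Controller (cdw10:00000001, i.e. CNS 01h) is issued once the controller reports ready. A minimal way to reproduce the same attach by hand from a Linux initiator, assuming nvme-cli and the kernel nvme-tcp module are available, using the 10.0.0.2:4420 listener and cnode1 NQN reported later in this log (the /dev/nvme0 names are placeholders for whatever the kernel actually enumerates):

# load the TCP initiator and connect to the SPDK target shown in this log
modprobe nvme-tcp
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
# Identify Controller and Identify Namespace, matching the IDENTIFY CNS values
# visible as cdw10 in the capsule trace above and below
nvme id-ctrl /dev/nvme0
nvme id-ns /dev/nvme0n1
# detach when done
nvme disconnect -n nqn.2016-06.io.spdk:cnode1

The trace resumes below with the host configuring asynchronous event reporting and the keep-alive timer.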
00:23:55.469 [2024-10-16 07:06:54.741037] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:55.469 [2024-10-16 07:06:54.741049] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a480, cid 0, qid 0 00:23:55.469 [2024-10-16 07:06:54.741223] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.469 [2024-10-16 07:06:54.741230] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.469 [2024-10-16 07:06:54.741233] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.469 [2024-10-16 07:06:54.741237] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x140a480) on tqpair=0x13aa760 00:23:55.470 [2024-10-16 07:06:54.741244] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.470 [2024-10-16 07:06:54.741248] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.470 [2024-10-16 07:06:54.741252] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13aa760) 00:23:55.470 [2024-10-16 07:06:54.741258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:55.470 [2024-10-16 07:06:54.741264] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.470 [2024-10-16 07:06:54.741268] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.470 [2024-10-16 07:06:54.741272] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x13aa760) 00:23:55.470 [2024-10-16 07:06:54.741278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:55.470 [2024-10-16 07:06:54.741284] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.470 [2024-10-16 07:06:54.741287] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.470 [2024-10-16 07:06:54.741291] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x13aa760) 00:23:55.470 [2024-10-16 07:06:54.741297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:55.470 [2024-10-16 07:06:54.741303] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.470 [2024-10-16 07:06:54.741309] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.470 [2024-10-16 07:06:54.741313] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13aa760) 00:23:55.470 [2024-10-16 07:06:54.741319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:55.470 [2024-10-16 07:06:54.741324] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:55.470 [2024-10-16 07:06:54.741334] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:55.470 [2024-10-16 07:06:54.741341] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.470 [2024-10-16 07:06:54.741344] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13aa760) 00:23:55.470 [2024-10-16 07:06:54.741351] 
nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.470 [2024-10-16 07:06:54.741364] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a480, cid 0, qid 0 00:23:55.470 [2024-10-16 07:06:54.741370] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a600, cid 1, qid 0 00:23:55.470 [2024-10-16 07:06:54.741375] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a780, cid 2, qid 0 00:23:55.470 [2024-10-16 07:06:54.741379] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a900, cid 3, qid 0 00:23:55.470 [2024-10-16 07:06:54.741384] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140aa80, cid 4, qid 0 00:23:55.470 [2024-10-16 07:06:54.741620] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.470 [2024-10-16 07:06:54.741626] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.470 [2024-10-16 07:06:54.741630] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.470 [2024-10-16 07:06:54.741634] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x140aa80) on tqpair=0x13aa760 00:23:55.470 [2024-10-16 07:06:54.741639] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:23:55.470 [2024-10-16 07:06:54.741644] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:55.470 [2024-10-16 07:06:54.741655] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:23:55.470 [2024-10-16 07:06:54.741664] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:55.470 [2024-10-16 07:06:54.741671] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.470 [2024-10-16 07:06:54.741675] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.470 [2024-10-16 07:06:54.741679] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13aa760) 00:23:55.470 [2024-10-16 07:06:54.741685] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:55.470 [2024-10-16 07:06:54.741696] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140aa80, cid 4, qid 0 00:23:55.470 [2024-10-16 07:06:54.741881] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.470 [2024-10-16 07:06:54.741888] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.470 [2024-10-16 07:06:54.741891] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.470 [2024-10-16 07:06:54.741895] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x140aa80) on tqpair=0x13aa760 00:23:55.470 [2024-10-16 07:06:54.741961] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:23:55.470 [2024-10-16 07:06:54.741971] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:55.470 [2024-10-16 07:06:54.741984] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.470 [2024-10-16 07:06:54.741988] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13aa760) 00:23:55.470 [2024-10-16 07:06:54.741995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.470 [2024-10-16 07:06:54.742006] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140aa80, cid 4, qid 0 00:23:55.470 [2024-10-16 07:06:54.742251] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:55.470 [2024-10-16 07:06:54.742257] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:55.470 [2024-10-16 07:06:54.742261] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:55.470 [2024-10-16 07:06:54.742264] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13aa760): datao=0, datal=4096, cccid=4 00:23:55.470 [2024-10-16 07:06:54.742269] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x140aa80) on tqpair(0x13aa760): expected_datao=0, payload_size=4096 00:23:55.470 [2024-10-16 07:06:54.742273] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.470 [2024-10-16 07:06:54.742280] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:55.470 [2024-10-16 07:06:54.742284] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:55.470 [2024-10-16 07:06:54.742396] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.470 [2024-10-16 07:06:54.742403] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.470 [2024-10-16 07:06:54.742406] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.470 [2024-10-16 07:06:54.742410] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x140aa80) on tqpair=0x13aa760 00:23:55.470 [2024-10-16 07:06:54.742428] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:23:55.470 [2024-10-16 07:06:54.742437] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:23:55.470 [2024-10-16 07:06:54.742447] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:23:55.470 [2024-10-16 07:06:54.742454] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.470 [2024-10-16 07:06:54.742458] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13aa760) 00:23:55.470 [2024-10-16 07:06:54.742464] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.470 [2024-10-16 07:06:54.742476] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140aa80, cid 4, qid 0 00:23:55.470 [2024-10-16 07:06:54.742672] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:55.470 [2024-10-16 07:06:54.742678] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:55.470 [2024-10-16 07:06:54.742682] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:55.470 [2024-10-16 07:06:54.742686] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13aa760): datao=0, datal=4096, cccid=4 00:23:55.470 [2024-10-16 07:06:54.742690] 
nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x140aa80) on tqpair(0x13aa760): expected_datao=0, payload_size=4096 00:23:55.470 [2024-10-16 07:06:54.742694] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.470 [2024-10-16 07:06:54.742701] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:55.470 [2024-10-16 07:06:54.742705] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:55.470 [2024-10-16 07:06:54.746851] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.470 [2024-10-16 07:06:54.746859] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.470 [2024-10-16 07:06:54.746862] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.470 [2024-10-16 07:06:54.746866] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x140aa80) on tqpair=0x13aa760 00:23:55.470 [2024-10-16 07:06:54.746891] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:55.470 [2024-10-16 07:06:54.746903] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:55.470 [2024-10-16 07:06:54.746911] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.470 [2024-10-16 07:06:54.746914] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13aa760) 00:23:55.470 [2024-10-16 07:06:54.746921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.470 [2024-10-16 07:06:54.746934] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140aa80, cid 4, qid 0 00:23:55.470 [2024-10-16 07:06:54.747145] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:55.470 [2024-10-16 07:06:54.747152] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:55.470 [2024-10-16 07:06:54.747155] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:55.470 [2024-10-16 07:06:54.747159] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13aa760): datao=0, datal=4096, cccid=4 00:23:55.470 [2024-10-16 07:06:54.747164] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x140aa80) on tqpair(0x13aa760): expected_datao=0, payload_size=4096 00:23:55.470 [2024-10-16 07:06:54.747168] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.470 [2024-10-16 07:06:54.747175] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:55.470 [2024-10-16 07:06:54.747179] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:55.470 [2024-10-16 07:06:54.747325] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.470 [2024-10-16 07:06:54.747331] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.470 [2024-10-16 07:06:54.747334] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.470 [2024-10-16 07:06:54.747338] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x140aa80) on tqpair=0x13aa760 00:23:55.470 [2024-10-16 07:06:54.747346] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:55.470 [2024-10-16 07:06:54.747355] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:23:55.470 [2024-10-16 07:06:54.747363] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:23:55.470 [2024-10-16 07:06:54.747370] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:23:55.470 [2024-10-16 07:06:54.747376] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:55.471 [2024-10-16 07:06:54.747381] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:23:55.471 [2024-10-16 07:06:54.747387] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:23:55.471 [2024-10-16 07:06:54.747392] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:23:55.471 [2024-10-16 07:06:54.747397] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:23:55.471 [2024-10-16 07:06:54.747414] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.471 [2024-10-16 07:06:54.747418] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13aa760) 00:23:55.471 [2024-10-16 07:06:54.747424] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.471 [2024-10-16 07:06:54.747433] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.471 [2024-10-16 07:06:54.747437] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.471 [2024-10-16 07:06:54.747441] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13aa760) 00:23:55.471 [2024-10-16 07:06:54.747447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:55.471 [2024-10-16 07:06:54.747459] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140aa80, cid 4, qid 0 00:23:55.471 [2024-10-16 07:06:54.747464] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140ac00, cid 5, qid 0 00:23:55.471 [2024-10-16 07:06:54.747666] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.471 [2024-10-16 07:06:54.747672] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.471 [2024-10-16 07:06:54.747676] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.471 [2024-10-16 07:06:54.747679] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x140aa80) on tqpair=0x13aa760 00:23:55.471 [2024-10-16 07:06:54.747686] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.471 [2024-10-16 07:06:54.747692] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.471 [2024-10-16 07:06:54.747696] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.471 [2024-10-16 07:06:54.747700] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x140ac00) on tqpair=0x13aa760 00:23:55.471 [2024-10-16 07:06:54.747709] 
nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.471 [2024-10-16 07:06:54.747713] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13aa760) 00:23:55.471 [2024-10-16 07:06:54.747719] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.471 [2024-10-16 07:06:54.747730] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140ac00, cid 5, qid 0 00:23:55.471 [2024-10-16 07:06:54.747947] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.471 [2024-10-16 07:06:54.747954] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.471 [2024-10-16 07:06:54.747957] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.471 [2024-10-16 07:06:54.747961] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x140ac00) on tqpair=0x13aa760 00:23:55.471 [2024-10-16 07:06:54.747970] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.471 [2024-10-16 07:06:54.747974] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13aa760) 00:23:55.471 [2024-10-16 07:06:54.747981] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.471 [2024-10-16 07:06:54.747991] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140ac00, cid 5, qid 0 00:23:55.471 [2024-10-16 07:06:54.748239] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.471 [2024-10-16 07:06:54.748245] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.471 [2024-10-16 07:06:54.748248] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.471 [2024-10-16 07:06:54.748252] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x140ac00) on tqpair=0x13aa760 00:23:55.471 [2024-10-16 07:06:54.748262] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.471 [2024-10-16 07:06:54.748266] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13aa760) 00:23:55.471 [2024-10-16 07:06:54.748273] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.471 [2024-10-16 07:06:54.748283] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140ac00, cid 5, qid 0 00:23:55.471 [2024-10-16 07:06:54.748457] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.471 [2024-10-16 07:06:54.748464] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.471 [2024-10-16 07:06:54.748469] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.471 [2024-10-16 07:06:54.748473] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x140ac00) on tqpair=0x13aa760 00:23:55.471 [2024-10-16 07:06:54.748488] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.471 [2024-10-16 07:06:54.748493] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13aa760) 00:23:55.471 [2024-10-16 07:06:54.748499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.471 [2024-10-16 07:06:54.748507] 
nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.471 [2024-10-16 07:06:54.748511] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13aa760) 00:23:55.471 [2024-10-16 07:06:54.748517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.471 [2024-10-16 07:06:54.748524] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.471 [2024-10-16 07:06:54.748528] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x13aa760) 00:23:55.471 [2024-10-16 07:06:54.748534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.471 [2024-10-16 07:06:54.748544] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.471 [2024-10-16 07:06:54.748547] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x13aa760) 00:23:55.471 [2024-10-16 07:06:54.748554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.471 [2024-10-16 07:06:54.748567] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140ac00, cid 5, qid 0 00:23:55.471 [2024-10-16 07:06:54.748572] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140aa80, cid 4, qid 0 00:23:55.471 [2024-10-16 07:06:54.748577] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140ad80, cid 6, qid 0 00:23:55.471 [2024-10-16 07:06:54.748582] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140af00, cid 7, qid 0 00:23:55.471 [2024-10-16 07:06:54.748862] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:55.471 [2024-10-16 07:06:54.748869] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:55.471 [2024-10-16 07:06:54.748872] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:55.471 [2024-10-16 07:06:54.748876] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13aa760): datao=0, datal=8192, cccid=5 00:23:55.471 [2024-10-16 07:06:54.748881] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x140ac00) on tqpair(0x13aa760): expected_datao=0, payload_size=8192 00:23:55.471 [2024-10-16 07:06:54.748885] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.471 [2024-10-16 07:06:54.748996] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:55.471 [2024-10-16 07:06:54.749000] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:55.471 [2024-10-16 07:06:54.749006] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:55.471 [2024-10-16 07:06:54.749012] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:55.471 [2024-10-16 07:06:54.749015] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:55.471 [2024-10-16 07:06:54.749019] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13aa760): datao=0, datal=512, cccid=4 00:23:55.471 [2024-10-16 07:06:54.749023] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x140aa80) on tqpair(0x13aa760): expected_datao=0, payload_size=512 00:23:55.471 [2024-10-16 07:06:54.749027] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.471 [2024-10-16 07:06:54.749034] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:55.471 [2024-10-16 07:06:54.749038] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:55.471 [2024-10-16 07:06:54.749045] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:55.471 [2024-10-16 07:06:54.749051] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:55.471 [2024-10-16 07:06:54.749055] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:55.471 [2024-10-16 07:06:54.749058] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13aa760): datao=0, datal=512, cccid=6 00:23:55.471 [2024-10-16 07:06:54.749063] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x140ad80) on tqpair(0x13aa760): expected_datao=0, payload_size=512 00:23:55.471 [2024-10-16 07:06:54.749067] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.471 [2024-10-16 07:06:54.749073] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:55.471 [2024-10-16 07:06:54.749077] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:55.471 [2024-10-16 07:06:54.749083] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:55.471 [2024-10-16 07:06:54.749089] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:55.471 [2024-10-16 07:06:54.749092] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:55.471 [2024-10-16 07:06:54.749096] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13aa760): datao=0, datal=4096, cccid=7 00:23:55.471 [2024-10-16 07:06:54.749100] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x140af00) on tqpair(0x13aa760): expected_datao=0, payload_size=4096 00:23:55.471 [2024-10-16 07:06:54.749104] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.471 [2024-10-16 07:06:54.749117] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:55.471 [2024-10-16 07:06:54.749120] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:55.471 [2024-10-16 07:06:54.749132] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.471 [2024-10-16 07:06:54.749138] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.471 [2024-10-16 07:06:54.749141] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.471 [2024-10-16 07:06:54.749145] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x140ac00) on tqpair=0x13aa760 00:23:55.471 [2024-10-16 07:06:54.749158] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.471 [2024-10-16 07:06:54.749164] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.471 [2024-10-16 07:06:54.749167] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.471 [2024-10-16 07:06:54.749171] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x140aa80) on tqpair=0x13aa760 00:23:55.471 [2024-10-16 07:06:54.749182] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.471 [2024-10-16 07:06:54.749188] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.471 [2024-10-16 07:06:54.749191] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.471 [2024-10-16 07:06:54.749195] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x140ad80) on 
tqpair=0x13aa760 00:23:55.472 [2024-10-16 07:06:54.749202] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.472 [2024-10-16 07:06:54.749208] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.472 [2024-10-16 07:06:54.749212] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.472 [2024-10-16 07:06:54.749216] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x140af00) on tqpair=0x13aa760 00:23:55.472 ===================================================== 00:23:55.472 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:55.472 ===================================================== 00:23:55.472 Controller Capabilities/Features 00:23:55.472 ================================ 00:23:55.472 Vendor ID: 8086 00:23:55.472 Subsystem Vendor ID: 8086 00:23:55.472 Serial Number: SPDK00000000000001 00:23:55.472 Model Number: SPDK bdev Controller 00:23:55.472 Firmware Version: 25.01 00:23:55.472 Recommended Arb Burst: 6 00:23:55.472 IEEE OUI Identifier: e4 d2 5c 00:23:55.472 Multi-path I/O 00:23:55.472 May have multiple subsystem ports: Yes 00:23:55.472 May have multiple controllers: Yes 00:23:55.472 Associated with SR-IOV VF: No 00:23:55.472 Max Data Transfer Size: 131072 00:23:55.472 Max Number of Namespaces: 32 00:23:55.472 Max Number of I/O Queues: 127 00:23:55.472 NVMe Specification Version (VS): 1.3 00:23:55.472 NVMe Specification Version (Identify): 1.3 00:23:55.472 Maximum Queue Entries: 128 00:23:55.472 Contiguous Queues Required: Yes 00:23:55.472 Arbitration Mechanisms Supported 00:23:55.472 Weighted Round Robin: Not Supported 00:23:55.472 Vendor Specific: Not Supported 00:23:55.472 Reset Timeout: 15000 ms 00:23:55.472 Doorbell Stride: 4 bytes 00:23:55.472 NVM Subsystem Reset: Not Supported 00:23:55.472 Command Sets Supported 00:23:55.472 NVM Command Set: Supported 00:23:55.472 Boot Partition: Not Supported 00:23:55.472 Memory Page Size Minimum: 4096 bytes 00:23:55.472 Memory Page Size Maximum: 4096 bytes 00:23:55.472 Persistent Memory Region: Not Supported 00:23:55.472 Optional Asynchronous Events Supported 00:23:55.472 Namespace Attribute Notices: Supported 00:23:55.472 Firmware Activation Notices: Not Supported 00:23:55.472 ANA Change Notices: Not Supported 00:23:55.472 PLE Aggregate Log Change Notices: Not Supported 00:23:55.472 LBA Status Info Alert Notices: Not Supported 00:23:55.472 EGE Aggregate Log Change Notices: Not Supported 00:23:55.472 Normal NVM Subsystem Shutdown event: Not Supported 00:23:55.472 Zone Descriptor Change Notices: Not Supported 00:23:55.472 Discovery Log Change Notices: Not Supported 00:23:55.472 Controller Attributes 00:23:55.472 128-bit Host Identifier: Supported 00:23:55.472 Non-Operational Permissive Mode: Not Supported 00:23:55.472 NVM Sets: Not Supported 00:23:55.472 Read Recovery Levels: Not Supported 00:23:55.472 Endurance Groups: Not Supported 00:23:55.472 Predictable Latency Mode: Not Supported 00:23:55.472 Traffic Based Keep ALive: Not Supported 00:23:55.472 Namespace Granularity: Not Supported 00:23:55.472 SQ Associations: Not Supported 00:23:55.472 UUID List: Not Supported 00:23:55.472 Multi-Domain Subsystem: Not Supported 00:23:55.472 Fixed Capacity Management: Not Supported 00:23:55.472 Variable Capacity Management: Not Supported 00:23:55.472 Delete Endurance Group: Not Supported 00:23:55.472 Delete NVM Set: Not Supported 00:23:55.472 Extended LBA Formats Supported: Not Supported 00:23:55.472 Flexible Data Placement Supported: Not Supported 
00:23:55.472 00:23:55.472 Controller Memory Buffer Support 00:23:55.472 ================================ 00:23:55.472 Supported: No 00:23:55.472 00:23:55.472 Persistent Memory Region Support 00:23:55.472 ================================ 00:23:55.472 Supported: No 00:23:55.472 00:23:55.472 Admin Command Set Attributes 00:23:55.472 ============================ 00:23:55.472 Security Send/Receive: Not Supported 00:23:55.472 Format NVM: Not Supported 00:23:55.472 Firmware Activate/Download: Not Supported 00:23:55.472 Namespace Management: Not Supported 00:23:55.472 Device Self-Test: Not Supported 00:23:55.472 Directives: Not Supported 00:23:55.472 NVMe-MI: Not Supported 00:23:55.472 Virtualization Management: Not Supported 00:23:55.472 Doorbell Buffer Config: Not Supported 00:23:55.472 Get LBA Status Capability: Not Supported 00:23:55.472 Command & Feature Lockdown Capability: Not Supported 00:23:55.472 Abort Command Limit: 4 00:23:55.472 Async Event Request Limit: 4 00:23:55.472 Number of Firmware Slots: N/A 00:23:55.472 Firmware Slot 1 Read-Only: N/A 00:23:55.472 Firmware Activation Without Reset: N/A 00:23:55.472 Multiple Update Detection Support: N/A 00:23:55.472 Firmware Update Granularity: No Information Provided 00:23:55.472 Per-Namespace SMART Log: No 00:23:55.472 Asymmetric Namespace Access Log Page: Not Supported 00:23:55.472 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:23:55.472 Command Effects Log Page: Supported 00:23:55.472 Get Log Page Extended Data: Supported 00:23:55.472 Telemetry Log Pages: Not Supported 00:23:55.472 Persistent Event Log Pages: Not Supported 00:23:55.472 Supported Log Pages Log Page: May Support 00:23:55.472 Commands Supported & Effects Log Page: Not Supported 00:23:55.472 Feature Identifiers & Effects Log Page:May Support 00:23:55.472 NVMe-MI Commands & Effects Log Page: May Support 00:23:55.472 Data Area 4 for Telemetry Log: Not Supported 00:23:55.472 Error Log Page Entries Supported: 128 00:23:55.472 Keep Alive: Supported 00:23:55.472 Keep Alive Granularity: 10000 ms 00:23:55.472 00:23:55.472 NVM Command Set Attributes 00:23:55.472 ========================== 00:23:55.472 Submission Queue Entry Size 00:23:55.472 Max: 64 00:23:55.472 Min: 64 00:23:55.472 Completion Queue Entry Size 00:23:55.472 Max: 16 00:23:55.472 Min: 16 00:23:55.472 Number of Namespaces: 32 00:23:55.472 Compare Command: Supported 00:23:55.472 Write Uncorrectable Command: Not Supported 00:23:55.472 Dataset Management Command: Supported 00:23:55.472 Write Zeroes Command: Supported 00:23:55.472 Set Features Save Field: Not Supported 00:23:55.472 Reservations: Supported 00:23:55.472 Timestamp: Not Supported 00:23:55.472 Copy: Supported 00:23:55.472 Volatile Write Cache: Present 00:23:55.472 Atomic Write Unit (Normal): 1 00:23:55.472 Atomic Write Unit (PFail): 1 00:23:55.472 Atomic Compare & Write Unit: 1 00:23:55.472 Fused Compare & Write: Supported 00:23:55.472 Scatter-Gather List 00:23:55.472 SGL Command Set: Supported 00:23:55.472 SGL Keyed: Supported 00:23:55.472 SGL Bit Bucket Descriptor: Not Supported 00:23:55.472 SGL Metadata Pointer: Not Supported 00:23:55.472 Oversized SGL: Not Supported 00:23:55.472 SGL Metadata Address: Not Supported 00:23:55.472 SGL Offset: Supported 00:23:55.472 Transport SGL Data Block: Not Supported 00:23:55.472 Replay Protected Memory Block: Not Supported 00:23:55.472 00:23:55.472 Firmware Slot Information 00:23:55.472 ========================= 00:23:55.472 Active slot: 1 00:23:55.472 Slot 1 Firmware Revision: 25.01 00:23:55.472 00:23:55.472 00:23:55.472 
Commands Supported and Effects 00:23:55.472 ============================== 00:23:55.472 Admin Commands 00:23:55.472 -------------- 00:23:55.472 Get Log Page (02h): Supported 00:23:55.472 Identify (06h): Supported 00:23:55.472 Abort (08h): Supported 00:23:55.472 Set Features (09h): Supported 00:23:55.472 Get Features (0Ah): Supported 00:23:55.472 Asynchronous Event Request (0Ch): Supported 00:23:55.472 Keep Alive (18h): Supported 00:23:55.472 I/O Commands 00:23:55.472 ------------ 00:23:55.472 Flush (00h): Supported LBA-Change 00:23:55.472 Write (01h): Supported LBA-Change 00:23:55.472 Read (02h): Supported 00:23:55.472 Compare (05h): Supported 00:23:55.472 Write Zeroes (08h): Supported LBA-Change 00:23:55.472 Dataset Management (09h): Supported LBA-Change 00:23:55.472 Copy (19h): Supported LBA-Change 00:23:55.472 00:23:55.472 Error Log 00:23:55.472 ========= 00:23:55.472 00:23:55.472 Arbitration 00:23:55.472 =========== 00:23:55.472 Arbitration Burst: 1 00:23:55.472 00:23:55.472 Power Management 00:23:55.472 ================ 00:23:55.472 Number of Power States: 1 00:23:55.472 Current Power State: Power State #0 00:23:55.472 Power State #0: 00:23:55.472 Max Power: 0.00 W 00:23:55.472 Non-Operational State: Operational 00:23:55.472 Entry Latency: Not Reported 00:23:55.472 Exit Latency: Not Reported 00:23:55.472 Relative Read Throughput: 0 00:23:55.472 Relative Read Latency: 0 00:23:55.472 Relative Write Throughput: 0 00:23:55.472 Relative Write Latency: 0 00:23:55.472 Idle Power: Not Reported 00:23:55.472 Active Power: Not Reported 00:23:55.472 Non-Operational Permissive Mode: Not Supported 00:23:55.472 00:23:55.472 Health Information 00:23:55.472 ================== 00:23:55.472 Critical Warnings: 00:23:55.472 Available Spare Space: OK 00:23:55.472 Temperature: OK 00:23:55.472 Device Reliability: OK 00:23:55.472 Read Only: No 00:23:55.472 Volatile Memory Backup: OK 00:23:55.473 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:55.473 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:23:55.473 Available Spare: 0% 00:23:55.473 Available Spare Threshold: 0% 00:23:55.473 Life Percentage Used:[2024-10-16 07:06:54.749320] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.473 [2024-10-16 07:06:54.749326] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x13aa760) 00:23:55.473 [2024-10-16 07:06:54.749333] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.473 [2024-10-16 07:06:54.749345] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140af00, cid 7, qid 0 00:23:55.473 [2024-10-16 07:06:54.749534] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.473 [2024-10-16 07:06:54.749541] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.473 [2024-10-16 07:06:54.749544] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.473 [2024-10-16 07:06:54.749550] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x140af00) on tqpair=0x13aa760 00:23:55.473 [2024-10-16 07:06:54.749587] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:23:55.473 [2024-10-16 07:06:54.749597] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x140a480) on tqpair=0x13aa760 00:23:55.473 [2024-10-16 07:06:54.749604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.473 [2024-10-16 07:06:54.749609] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x140a600) on tqpair=0x13aa760 00:23:55.473 [2024-10-16 07:06:54.749614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.473 [2024-10-16 07:06:54.749619] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x140a780) on tqpair=0x13aa760 00:23:55.473 [2024-10-16 07:06:54.749624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.473 [2024-10-16 07:06:54.749629] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x140a900) on tqpair=0x13aa760 00:23:55.473 [2024-10-16 07:06:54.749633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.473 [2024-10-16 07:06:54.749642] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.473 [2024-10-16 07:06:54.749646] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.473 [2024-10-16 07:06:54.749650] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13aa760) 00:23:55.473 [2024-10-16 07:06:54.749657] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.473 [2024-10-16 07:06:54.749669] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a900, cid 3, qid 0 00:23:55.473 [2024-10-16 07:06:54.749864] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.473 [2024-10-16 07:06:54.749871] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.473 [2024-10-16 07:06:54.749874] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.473 [2024-10-16 07:06:54.749878] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x140a900) on tqpair=0x13aa760 00:23:55.473 [2024-10-16 07:06:54.749885] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.473 [2024-10-16 07:06:54.749889] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.473 [2024-10-16 07:06:54.749893] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13aa760) 00:23:55.473 [2024-10-16 07:06:54.749900] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.473 [2024-10-16 07:06:54.749914] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a900, cid 3, qid 0 00:23:55.473 [2024-10-16 07:06:54.750127] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.473 [2024-10-16 07:06:54.750133] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.473 [2024-10-16 07:06:54.750136] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.473 [2024-10-16 07:06:54.750140] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x140a900) on tqpair=0x13aa760 00:23:55.473 [2024-10-16 07:06:54.750145] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:23:55.473 [2024-10-16 07:06:54.750150] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:23:55.473 [2024-10-16 07:06:54.750159] 
nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.473 [2024-10-16 07:06:54.750163] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.473 [2024-10-16 07:06:54.750167] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13aa760) 00:23:55.473 [2024-10-16 07:06:54.750174] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.473 [2024-10-16 07:06:54.750187] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a900, cid 3, qid 0 00:23:55.473 [2024-10-16 07:06:54.750361] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.473 [2024-10-16 07:06:54.750367] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.473 [2024-10-16 07:06:54.750370] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.473 [2024-10-16 07:06:54.750374] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x140a900) on tqpair=0x13aa760 00:23:55.473 [2024-10-16 07:06:54.750385] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.473 [2024-10-16 07:06:54.750389] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.473 [2024-10-16 07:06:54.750393] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13aa760) 00:23:55.473 [2024-10-16 07:06:54.750400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.473 [2024-10-16 07:06:54.750410] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a900, cid 3, qid 0 00:23:55.473 [2024-10-16 07:06:54.750598] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.473 [2024-10-16 07:06:54.750605] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.473 [2024-10-16 07:06:54.750608] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.473 [2024-10-16 07:06:54.750612] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x140a900) on tqpair=0x13aa760 00:23:55.473 [2024-10-16 07:06:54.750622] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.473 [2024-10-16 07:06:54.750626] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.473 [2024-10-16 07:06:54.750630] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13aa760) 00:23:55.473 [2024-10-16 07:06:54.750636] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.473 [2024-10-16 07:06:54.750647] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a900, cid 3, qid 0 00:23:55.473 [2024-10-16 07:06:54.750823] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.473 [2024-10-16 07:06:54.750829] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.473 [2024-10-16 07:06:54.750832] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.473 [2024-10-16 07:06:54.750836] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x140a900) on tqpair=0x13aa760 00:23:55.473 [2024-10-16 07:06:54.754852] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.473 [2024-10-16 07:06:54.754859] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.473 [2024-10-16 07:06:54.754862] 
nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13aa760) 00:23:55.473 [2024-10-16 07:06:54.754869] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.473 [2024-10-16 07:06:54.754881] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a900, cid 3, qid 0 00:23:55.473 [2024-10-16 07:06:54.755058] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.473 [2024-10-16 07:06:54.755064] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.473 [2024-10-16 07:06:54.755068] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.473 [2024-10-16 07:06:54.755072] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x140a900) on tqpair=0x13aa760 00:23:55.473 [2024-10-16 07:06:54.755080] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:23:55.473 0% 00:23:55.473 Data Units Read: 0 00:23:55.473 Data Units Written: 0 00:23:55.473 Host Read Commands: 0 00:23:55.473 Host Write Commands: 0 00:23:55.473 Controller Busy Time: 0 minutes 00:23:55.473 Power Cycles: 0 00:23:55.473 Power On Hours: 0 hours 00:23:55.473 Unsafe Shutdowns: 0 00:23:55.473 Unrecoverable Media Errors: 0 00:23:55.473 Lifetime Error Log Entries: 0 00:23:55.473 Warning Temperature Time: 0 minutes 00:23:55.473 Critical Temperature Time: 0 minutes 00:23:55.473 00:23:55.473 Number of Queues 00:23:55.473 ================ 00:23:55.473 Number of I/O Submission Queues: 127 00:23:55.473 Number of I/O Completion Queues: 127 00:23:55.473 00:23:55.473 Active Namespaces 00:23:55.473 ================= 00:23:55.473 Namespace ID:1 00:23:55.473 Error Recovery Timeout: Unlimited 00:23:55.473 Command Set Identifier: NVM (00h) 00:23:55.473 Deallocate: Supported 00:23:55.473 Deallocated/Unwritten Error: Not Supported 00:23:55.473 Deallocated Read Value: Unknown 00:23:55.473 Deallocate in Write Zeroes: Not Supported 00:23:55.474 Deallocated Guard Field: 0xFFFF 00:23:55.474 Flush: Supported 00:23:55.474 Reservation: Supported 00:23:55.474 Namespace Sharing Capabilities: Multiple Controllers 00:23:55.474 Size (in LBAs): 131072 (0GiB) 00:23:55.474 Capacity (in LBAs): 131072 (0GiB) 00:23:55.474 Utilization (in LBAs): 131072 (0GiB) 00:23:55.474 NGUID: ABCDEF0123456789ABCDEF0123456789 00:23:55.474 EUI64: ABCDEF0123456789 00:23:55.474 UUID: ce5570f3-a8f0-4c8e-afdc-10042a7c1089 00:23:55.474 Thin Provisioning: Not Supported 00:23:55.474 Per-NS Atomic Units: Yes 00:23:55.474 Atomic Boundary Size (Normal): 0 00:23:55.474 Atomic Boundary Size (PFail): 0 00:23:55.474 Atomic Boundary Offset: 0 00:23:55.474 Maximum Single Source Range Length: 65535 00:23:55.474 Maximum Copy Length: 65535 00:23:55.474 Maximum Source Range Count: 1 00:23:55.474 NGUID/EUI64 Never Reused: No 00:23:55.474 Namespace Write Protected: No 00:23:55.474 Number of LBA Formats: 1 00:23:55.474 Current LBA Format: LBA Format #00 00:23:55.474 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:55.474 00:23:55.474 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:23:55.474 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:55.474 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.474 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@10 -- # set +x 00:23:55.474 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.474 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:55.474 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:23:55.474 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:55.474 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:23:55.474 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:55.474 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:23:55.474 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:55.474 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:55.474 rmmod nvme_tcp 00:23:55.474 rmmod nvme_fabrics 00:23:55.474 rmmod nvme_keyring 00:23:55.474 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:55.474 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:23:55.474 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:23:55.474 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@515 -- # '[' -n 3217461 ']' 00:23:55.474 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # killprocess 3217461 00:23:55.474 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 3217461 ']' 00:23:55.474 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 3217461 00:23:55.474 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:23:55.474 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:55.474 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3217461 00:23:55.474 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:55.474 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:55.474 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3217461' 00:23:55.474 killing process with pid 3217461 00:23:55.474 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 3217461 00:23:55.474 07:06:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 3217461 00:23:55.736 07:06:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:55.736 07:06:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:55.736 07:06:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:55.736 07:06:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:23:55.736 07:06:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-save 00:23:55.736 07:06:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:55.736 07:06:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-restore 00:23:55.736 07:06:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:55.736 07:06:55 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:55.736 07:06:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:55.736 07:06:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:55.736 07:06:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:58.285 07:06:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:58.285 00:23:58.285 real 0m11.674s 00:23:58.285 user 0m8.432s 00:23:58.285 sys 0m6.222s 00:23:58.285 07:06:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:58.285 07:06:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:58.285 ************************************ 00:23:58.285 END TEST nvmf_identify 00:23:58.285 ************************************ 00:23:58.285 07:06:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:58.285 07:06:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:58.285 07:06:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:58.285 07:06:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.285 ************************************ 00:23:58.285 START TEST nvmf_perf 00:23:58.285 ************************************ 00:23:58.285 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:58.285 * Looking for test storage... 
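With nvmf_identify passed and the target torn down, the harness immediately launches the next suite, nvmf_perf, via test/nvmf/host/perf.sh --transport=tcp. That script ultimately drives SPDK's perf example against a TCP listener; a hand-run equivalent is sketched below, assuming an in-tree SPDK build (the build/examples/perf path and the 4 KiB randread workload are illustrative, with the transport ID matching the listener printed by the identify test):

# queue depth 32, 4 KiB random reads for 10 seconds over NVMe/TCP
./build/examples/perf -q 32 -o 4096 -w randread -t 10 \
    -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The log continues below as perf.sh locates its test storage and sources the common nvmf helpers.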
00:23:58.285 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:58.285 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:58.285 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:23:58.285 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:58.285 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:58.285 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:58.285 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:58.285 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:58.285 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:23:58.285 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:23:58.285 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:23:58.285 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:23:58.285 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:58.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:58.286 --rc genhtml_branch_coverage=1 00:23:58.286 --rc genhtml_function_coverage=1 00:23:58.286 --rc genhtml_legend=1 00:23:58.286 --rc geninfo_all_blocks=1 00:23:58.286 --rc geninfo_unexecuted_blocks=1 00:23:58.286 00:23:58.286 ' 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:58.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:58.286 --rc genhtml_branch_coverage=1 00:23:58.286 --rc genhtml_function_coverage=1 00:23:58.286 --rc genhtml_legend=1 00:23:58.286 --rc geninfo_all_blocks=1 00:23:58.286 --rc geninfo_unexecuted_blocks=1 00:23:58.286 00:23:58.286 ' 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:58.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:58.286 --rc genhtml_branch_coverage=1 00:23:58.286 --rc genhtml_function_coverage=1 00:23:58.286 --rc genhtml_legend=1 00:23:58.286 --rc geninfo_all_blocks=1 00:23:58.286 --rc geninfo_unexecuted_blocks=1 00:23:58.286 00:23:58.286 ' 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:58.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:58.286 --rc genhtml_branch_coverage=1 00:23:58.286 --rc genhtml_function_coverage=1 00:23:58.286 --rc genhtml_legend=1 00:23:58.286 --rc geninfo_all_blocks=1 00:23:58.286 --rc geninfo_unexecuted_blocks=1 00:23:58.286 00:23:58.286 ' 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:58.286 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:58.286 07:06:57 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:23:58.286 07:06:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:06.429 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:06.429 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:06.429 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:06.429 07:07:04 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:06.429 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:06.429 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:06.430 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # is_hw=yes 00:24:06.430 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:06.430 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:06.430 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:06.430 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:06.430 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:06.430 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:06.430 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:06.430 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:06.430 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:06.430 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:06.430 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:06.430 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:06.430 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:06.430 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:06.430 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:06.430 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:06.430 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:06.430 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:06.430 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:06.430 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:06.430 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:06.430 07:07:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:06.430 07:07:05 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:06.430 07:07:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:06.430 07:07:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:24:06.430 07:07:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:24:06.430 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:06.430 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.684 ms
00:24:06.430
00:24:06.430 --- 10.0.0.2 ping statistics ---
00:24:06.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:06.430 rtt min/avg/max/mdev = 0.684/0.684/0.684/0.000 ms
00:24:06.430 07:07:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:06.430 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:06.430 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms
00:24:06.430
00:24:06.430 --- 10.0.0.1 ping statistics ---
00:24:06.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:06.430 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms
00:24:06.430 07:07:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:06.430 07:07:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # return 0
00:24:06.430 07:07:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:24:06.430 07:07:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:06.430 07:07:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:24:06.430 07:07:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:24:06.430 07:07:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:06.430 07:07:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:24:06.430 07:07:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:24:06.430 07:07:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF
00:24:06.430 07:07:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:24:06.430 07:07:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable
00:24:06.430 07:07:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:24:06.430 07:07:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # nvmfpid=3221994
00:24:06.430 07:07:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # waitforlisten 3221994
00:24:06.430 07:07:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:24:06.430 07:07:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 3221994 ']'
00:24:06.430 07:07:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:06.430 07:07:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100
00:24:06.430 07:07:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket
/var/tmp/spdk.sock...' 00:24:06.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:06.430 07:07:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:06.430 07:07:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:06.430 [2024-10-16 07:07:05.156608] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:24:06.430 [2024-10-16 07:07:05.156677] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:06.430 [2024-10-16 07:07:05.245029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:06.430 [2024-10-16 07:07:05.298757] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:06.430 [2024-10-16 07:07:05.298815] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:06.430 [2024-10-16 07:07:05.298825] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:06.430 [2024-10-16 07:07:05.298832] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:06.430 [2024-10-16 07:07:05.298838] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:06.430 [2024-10-16 07:07:05.300901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:06.430 [2024-10-16 07:07:05.300990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:06.430 [2024-10-16 07:07:05.301146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:06.430 [2024-10-16 07:07:05.301146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:06.691 07:07:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:06.691 07:07:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:24:06.691 07:07:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:06.691 07:07:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:06.691 07:07:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:06.691 07:07:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:06.691 07:07:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:06.691 07:07:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:07.264 07:07:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:07.264 07:07:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:07.264 07:07:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:24:07.264 07:07:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:07.526 07:07:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
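The perf.sh@30 lines above show how the script locates the local NVMe drive: it dumps the bdev subsystem configuration over the RPC socket and filters the attach parameters with jq. A minimal sketch of that query, assuming the target is already running and the controller was attached under the name "Nvme0" (as in this run):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# framework_get_config returns the subsystem's JSON config; the jq filter picks
# the PCI address (traddr) out of the controller-attach params for Nvme0.
local_nvme_trid=$("$rpc" framework_get_config bdev \
    | jq -r '.[].params | select(.name=="Nvme0").traddr')

# An empty result means no local NVMe controller is configured; the Malloc
# bdev created at perf.sh@31 is then the only namespace under test.
[ -n "$local_nvme_trid" ] && echo "local NVMe at $local_nvme_trid"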
00:24:07.526 07:07:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']'
00:24:07.526 07:07:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1'
00:24:07.526 07:07:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']'
00:24:07.526 07:07:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:24:07.787 [2024-10-16 07:07:07.127742] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:07.787 07:07:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:24:08.048 07:07:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:24:08.048 07:07:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:24:08.309 07:07:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:24:08.309 07:07:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:24:08.309 07:07:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:08.571 [2024-10-16 07:07:07.899372] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:08.571 07:07:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:24:08.831 07:07:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']'
00:24:08.831 07:07:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0'
00:24:08.831 07:07:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']'
00:24:08.831 07:07:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0'
00:24:10.216 Initializing NVMe Controllers
00:24:10.216 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a]
00:24:10.216 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0
00:24:10.216 Initialization complete. Launching workers.
00:24:10.216 ========================================================
00:24:10.216 Latency(us)
00:24:10.216 Device Information : IOPS MiB/s Average min max
00:24:10.216 PCIE (0000:65:00.0) NSID 1 from core 0: 77582.57 303.06 411.78 14.87 8365.97
00:24:10.216 ========================================================
00:24:10.216 Total : 77582.57 303.06 411.78 14.87 8365.97
00:24:10.216
00:24:10.216 07:07:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:24:11.159 Initializing NVMe Controllers
00:24:11.159 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:11.159 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:11.159 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:24:11.159 Initialization complete. Launching workers.
00:24:11.159 ========================================================
00:24:11.159 Latency(us)
00:24:11.159 Device Information : IOPS MiB/s Average min max
00:24:11.159 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 101.00 0.39 10206.30 204.00 46013.96
00:24:11.159 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 54.00 0.21 19065.45 7954.79 47893.09
00:24:11.159 ========================================================
00:24:11.159 Total : 155.00 0.61 13292.71 204.00 47893.09
00:24:11.159
00:24:11.419 07:07:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:24:12.803 Initializing NVMe Controllers
00:24:12.803 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:12.803 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:12.803 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:24:12.803 Initialization complete. Launching workers.
00:24:12.803 ========================================================
00:24:12.803 Latency(us)
00:24:12.803 Device Information : IOPS MiB/s Average min max
00:24:12.803 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11751.99 45.91 2725.66 456.83 6797.90
00:24:12.803 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3827.00 14.95 8408.10 5484.97 15902.55
00:24:12.803 ========================================================
00:24:12.803 Total : 15578.98 60.86 4121.56 456.83 15902.55
00:24:12.803
00:24:12.803 07:07:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:24:12.803 07:07:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:24:12.803 07:07:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:24:15.348 Initializing NVMe Controllers
00:24:15.348 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:15.348 Controller IO queue size 128, less than required.
00:24:15.348 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:15.348 Controller IO queue size 128, less than required.
00:24:15.348 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:15.348 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:15.348 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:24:15.348 Initialization complete. Launching workers.
00:24:15.348 ========================================================
00:24:15.348 Latency(us)
00:24:15.348 Device Information : IOPS MiB/s Average min max
00:24:15.348 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1904.17 476.04 68665.16 39497.22 116121.57
00:24:15.348 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 579.10 144.77 229696.69 60315.97 389388.10
00:24:15.348 ========================================================
00:24:15.348 Total : 2483.27 620.82 106217.58 39497.22 389388.10
00:24:15.348
00:24:15.348 07:07:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:24:15.348 No valid NVMe controllers or AIO or URING devices found
00:24:15.348 Initializing NVMe Controllers
00:24:15.348 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:15.348 Controller IO queue size 128, less than required.
00:24:15.348 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:15.348 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:24:15.348 Controller IO queue size 128, less than required.
00:24:15.348 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:15.348 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:24:15.348 WARNING: Some requested NVMe devices were skipped
00:24:15.348 07:07:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:24:17.900 Initializing NVMe Controllers
00:24:17.900 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:17.900 Controller IO queue size 128, less than required.
00:24:17.900 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:17.900 Controller IO queue size 128, less than required.
00:24:17.900 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:17.900 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:17.900 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:24:17.900 Initialization complete. Launching workers.
00:24:17.900 00:24:17.900 ==================== 00:24:17.900 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:17.900 TCP transport: 00:24:17.900 polls: 40990 00:24:17.900 idle_polls: 24227 00:24:17.900 sock_completions: 16763 00:24:17.900 nvme_completions: 8777 00:24:17.900 submitted_requests: 13060 00:24:17.900 queued_requests: 1 00:24:17.900 00:24:17.900 ==================== 00:24:17.900 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:17.900 TCP transport: 00:24:17.900 polls: 42254 00:24:17.900 idle_polls: 26684 00:24:17.900 sock_completions: 15570 00:24:17.900 nvme_completions: 6993 00:24:17.900 submitted_requests: 10454 00:24:17.900 queued_requests: 1 00:24:17.900 ======================================================== 00:24:17.900 Latency(us) 00:24:17.900 Device Information : IOPS MiB/s Average min max 00:24:17.900 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2193.95 548.49 59019.10 33332.50 111750.71 00:24:17.900 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1747.96 436.99 74597.57 31558.96 119034.88 00:24:17.900 ======================================================== 00:24:17.900 Total : 3941.90 985.48 65927.06 31558.96 119034.88 00:24:17.900 00:24:17.900 07:07:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:17.900 07:07:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:18.163 07:07:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:18.163 07:07:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:18.163 07:07:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:18.163 07:07:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:18.163 07:07:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:24:18.163 07:07:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:18.163 07:07:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:24:18.163 07:07:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:18.163 07:07:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:18.163 rmmod nvme_tcp 00:24:18.163 rmmod nvme_fabrics 00:24:18.163 rmmod nvme_keyring 00:24:18.163 07:07:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:18.163 07:07:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:24:18.163 07:07:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:24:18.163 07:07:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@515 -- # '[' -n 3221994 ']' 00:24:18.163 07:07:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # killprocess 3221994 00:24:18.163 07:07:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 3221994 ']' 00:24:18.163 07:07:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 3221994 00:24:18.163 07:07:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:24:18.163 07:07:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:18.163 07:07:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3221994 00:24:18.163 07:07:17 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:18.163 07:07:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:18.163 07:07:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3221994' 00:24:18.163 killing process with pid 3221994 00:24:18.163 07:07:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 3221994 00:24:18.163 07:07:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 3221994 00:24:20.078 07:07:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:20.078 07:07:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:20.078 07:07:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:20.078 07:07:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:24:20.078 07:07:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-save 00:24:20.078 07:07:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:20.078 07:07:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-restore 00:24:20.078 07:07:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:20.078 07:07:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:20.078 07:07:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:20.078 07:07:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:20.078 07:07:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:22.622 07:07:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:22.622 00:24:22.622 real 0m24.331s 00:24:22.622 user 0m58.495s 00:24:22.622 sys 0m8.633s 00:24:22.622 07:07:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:22.622 07:07:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:22.622 ************************************ 00:24:22.622 END TEST nvmf_perf 00:24:22.622 ************************************ 00:24:22.622 07:07:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:22.622 07:07:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:22.622 07:07:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:22.622 07:07:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.622 ************************************ 00:24:22.622 START TEST nvmf_fio_host 00:24:22.622 ************************************ 00:24:22.622 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:22.622 * Looking for test storage... 
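The nvmf_tcp_fini sequence traced just above tears the test network back down: iptables rules tagged SPDK_NVMF are filtered out of a save/restore round-trip, the target's network namespace is removed, and the initiator interface is flushed. A sketch of the same cleanup, assuming this run's interface and namespace names (cvl_0_1, cvl_0_0_ns_spdk); the body of _remove_spdk_ns is not visible in the trace, so the netns deletion below is an assumed equivalent:

# Drop only the tagged SPDK rules; every other iptables rule survives the round-trip.
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Assumed equivalent of _remove_spdk_ns: delete the target namespace if present.
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true

# Clear the IPv4 addresses left on the initiator-side interface.
ip -4 addr flush cvl_0_1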
00:24:22.622 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:22.622 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:22.622 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:24:22.622 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:22.622 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:22.622 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:22.622 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:22.622 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:22.622 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:22.622 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:22.622 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:22.622 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:22.622 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:22.622 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:22.622 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:22.622 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:22.622 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:24:22.622 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:24:22.622 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:22.622 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:22.622 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:24:22.622 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:24:22.622 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:22.622 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:24:22.622 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:22.622 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:24:22.622 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:24:22.622 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:22.622 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:24:22.622 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:22.622 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:22.622 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:22.622 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:24:22.622 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:22.622 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:22.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:22.622 --rc genhtml_branch_coverage=1 00:24:22.622 --rc genhtml_function_coverage=1 00:24:22.622 --rc genhtml_legend=1 00:24:22.622 --rc geninfo_all_blocks=1 00:24:22.622 --rc geninfo_unexecuted_blocks=1 00:24:22.622 00:24:22.622 ' 00:24:22.622 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:22.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:22.622 --rc genhtml_branch_coverage=1 00:24:22.622 --rc genhtml_function_coverage=1 00:24:22.622 --rc genhtml_legend=1 00:24:22.622 --rc geninfo_all_blocks=1 00:24:22.622 --rc geninfo_unexecuted_blocks=1 00:24:22.622 00:24:22.622 ' 00:24:22.622 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:22.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:22.622 --rc genhtml_branch_coverage=1 00:24:22.622 --rc genhtml_function_coverage=1 00:24:22.622 --rc genhtml_legend=1 00:24:22.622 --rc geninfo_all_blocks=1 00:24:22.622 --rc geninfo_unexecuted_blocks=1 00:24:22.622 00:24:22.622 ' 00:24:22.622 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:22.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:22.622 --rc genhtml_branch_coverage=1 00:24:22.622 --rc genhtml_function_coverage=1 00:24:22.622 --rc genhtml_legend=1 00:24:22.622 --rc geninfo_all_blocks=1 00:24:22.622 --rc geninfo_unexecuted_blocks=1 00:24:22.622 00:24:22.622 ' 00:24:22.623 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:22.623 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:22.623 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:22.623 07:07:21 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:22.623 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:22.623 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.623 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.623 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.623 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:22.623 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.623 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:22.623 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:22.623 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:22.623 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:22.623 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:22.623 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:22.623 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:22.623 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:22.623 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:22.623 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:22.623 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:22.623 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:22.623 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:22.623 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:22.623 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:22.623 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:22.623 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:22.623 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:22.623 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:22.623 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:22.623 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:22.623 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:22.623 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:22.623 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.623 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.623 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.623 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:22.623 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.623 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:24:22.623 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:22.623 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:22.623 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:22.623 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:22.623 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:22.623 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:22.623 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:22.623 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:22.623 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:22.623 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:22.623 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:22.623 
07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:22.623 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:22.623 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:22.623 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:22.623 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:22.623 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:22.623 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:22.623 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:22.623 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:22.623 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:22.623 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:22.623 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:24:22.623 07:07:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:30.766 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:30.766 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:30.766 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:30.766 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:30.766 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:30.767 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:30.767 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # is_hw=yes 00:24:30.767 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:30.767 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:30.767 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:30.767 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:30.767 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:30.767 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:30.767 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:30.767 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:30.767 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:30.767 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:30.767 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:30.767 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:30.767 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:30.767 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:30.767 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:30.767 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:30.767 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:30.767 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:30.767 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:30.767 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:30.767 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:30.767 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:30.767 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:30.767 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:30.767 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:30.767 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:30.767 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:30.767 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.698 ms 00:24:30.767 00:24:30.767 --- 10.0.0.2 ping statistics --- 00:24:30.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:30.767 rtt min/avg/max/mdev = 0.698/0.698/0.698/0.000 ms 00:24:30.767 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:30.767 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:30.767 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:24:30.767 00:24:30.767 --- 10.0.0.1 ping statistics --- 00:24:30.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:30.767 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:24:30.767 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:30.767 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # return 0 00:24:30.767 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:30.767 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:30.767 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:30.767 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:30.767 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:30.767 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:30.767 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:30.767 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:30.767 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:30.767 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:30.767 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.767 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3228889 00:24:30.767 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:30.767 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3228889 00:24:30.767 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 3228889 ']' 00:24:30.767 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:30.767 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:30.767 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:30.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:30.767 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:30.767 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:30.767 07:07:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.767 [2024-10-16 07:07:29.464347] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
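The trace above has just finished building the split-namespace TCP topology this suite runs on: the target-side port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, the initiator-side port cvl_0_1 stays in the root namespace as 10.0.0.1, an iptables ACCEPT rule opens TCP/4420, and both directions are ping-verified before nvmf_tgt is launched inside the namespace (the DPDK/EAL startup that follows). A minimal standalone sketch of the same setup, assuming the two interface names discovered earlier in the trace (cvl_0_0, cvl_0_1) — every command below appears verbatim in the log, the sketch only collects them in one place:

# Target interface lives in its own namespace; initiator stays in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP listener port on the initiator side, then sanity-check both paths.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1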
00:24:30.767 [2024-10-16 07:07:29.464416] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:30.767 [2024-10-16 07:07:29.552531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:30.767 [2024-10-16 07:07:29.605541] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:30.767 [2024-10-16 07:07:29.605595] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:30.767 [2024-10-16 07:07:29.605609] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:30.767 [2024-10-16 07:07:29.605616] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:30.767 [2024-10-16 07:07:29.605622] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:30.767 [2024-10-16 07:07:29.607709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:30.767 [2024-10-16 07:07:29.607888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:30.767 [2024-10-16 07:07:29.608002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:30.767 [2024-10-16 07:07:29.608004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:31.030 07:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:31.030 07:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:24:31.030 07:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:31.030 [2024-10-16 07:07:30.452359] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:31.030 07:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:31.030 07:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:31.030 07:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.292 07:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:31.292 Malloc1 00:24:31.292 07:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:31.555 07:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:31.816 07:07:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:31.816 [2024-10-16 07:07:31.301580] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:32.077 07:07:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:32.077 07:07:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:32.077 07:07:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:32.077 07:07:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:32.077 07:07:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:32.077 07:07:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:32.077 07:07:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:32.077 07:07:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:32.077 07:07:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:32.077 07:07:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:32.077 07:07:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:32.077 07:07:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:32.077 07:07:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:32.077 07:07:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:32.077 07:07:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:32.077 07:07:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:32.077 07:07:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:32.077 07:07:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:32.077 07:07:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:32.077 07:07:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:32.368 07:07:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:32.368 07:07:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:32.368 07:07:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:32.368 07:07:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:32.629 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:32.629 fio-3.35 00:24:32.629 Starting 1 thread 00:24:35.173 00:24:35.173 test: (groupid=0, jobs=1): 
err= 0: pid=3229711: Wed Oct 16 07:07:34 2024 00:24:35.173 read: IOPS=13.7k, BW=53.4MiB/s (56.0MB/s)(107MiB/2004msec) 00:24:35.173 slat (usec): min=2, max=299, avg= 2.18, stdev= 2.56 00:24:35.173 clat (usec): min=3512, max=9184, avg=5162.12, stdev=405.10 00:24:35.173 lat (usec): min=3514, max=9190, avg=5164.30, stdev=405.36 00:24:35.173 clat percentiles (usec): 00:24:35.173 | 1.00th=[ 4359], 5.00th=[ 4621], 10.00th=[ 4752], 20.00th=[ 4883], 00:24:35.173 | 30.00th=[ 4948], 40.00th=[ 5080], 50.00th=[ 5145], 60.00th=[ 5211], 00:24:35.173 | 70.00th=[ 5342], 80.00th=[ 5407], 90.00th=[ 5604], 95.00th=[ 5735], 00:24:35.173 | 99.00th=[ 6128], 99.50th=[ 7046], 99.90th=[ 8455], 99.95th=[ 8848], 00:24:35.173 | 99.99th=[ 8979] 00:24:35.173 bw ( KiB/s): min=51936, max=55696, per=99.92%, avg=54666.00, stdev=1822.47, samples=4 00:24:35.173 iops : min=12984, max=13922, avg=13666.00, stdev=455.24, samples=4 00:24:35.173 write: IOPS=13.7k, BW=53.3MiB/s (55.9MB/s)(107MiB/2004msec); 0 zone resets 00:24:35.173 slat (usec): min=2, max=282, avg= 2.26, stdev= 1.87 00:24:35.173 clat (usec): min=2961, max=7851, avg=4171.48, stdev=338.01 00:24:35.173 lat (usec): min=2979, max=7891, avg=4173.74, stdev=338.35 00:24:35.173 clat percentiles (usec): 00:24:35.173 | 1.00th=[ 3490], 5.00th=[ 3687], 10.00th=[ 3818], 20.00th=[ 3916], 00:24:35.173 | 30.00th=[ 4015], 40.00th=[ 4080], 50.00th=[ 4146], 60.00th=[ 4228], 00:24:35.173 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4555], 95.00th=[ 4621], 00:24:35.173 | 99.00th=[ 5014], 99.50th=[ 5866], 99.90th=[ 7111], 99.95th=[ 7373], 00:24:35.173 | 99.99th=[ 7635] 00:24:35.173 bw ( KiB/s): min=52296, max=55680, per=100.00%, avg=54626.00, stdev=1565.88, samples=4 00:24:35.173 iops : min=13074, max=13920, avg=13656.50, stdev=391.47, samples=4 00:24:35.173 lat (msec) : 4=14.28%, 10=85.72% 00:24:35.173 cpu : usr=73.99%, sys=24.66%, ctx=33, majf=0, minf=8 00:24:35.173 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:35.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:35.173 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:35.173 issued rwts: total=27410,27363,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:35.173 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:35.173 00:24:35.173 Run status group 0 (all jobs): 00:24:35.173 READ: bw=53.4MiB/s (56.0MB/s), 53.4MiB/s-53.4MiB/s (56.0MB/s-56.0MB/s), io=107MiB (112MB), run=2004-2004msec 00:24:35.173 WRITE: bw=53.3MiB/s (55.9MB/s), 53.3MiB/s-53.3MiB/s (55.9MB/s-55.9MB/s), io=107MiB (112MB), run=2004-2004msec 00:24:35.173 07:07:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:35.173 07:07:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:35.173 07:07:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:35.173 07:07:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:35.173 07:07:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:35.173 
07:07:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:35.173 07:07:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:35.173 07:07:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:35.173 07:07:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:35.173 07:07:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:35.173 07:07:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:35.173 07:07:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:35.173 07:07:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:35.173 07:07:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:35.173 07:07:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:35.173 07:07:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:35.173 07:07:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:35.173 07:07:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:35.173 07:07:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:35.173 07:07:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:35.173 07:07:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:35.173 07:07:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:35.173 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:35.173 fio-3.35 00:24:35.173 Starting 1 thread 00:24:37.717 00:24:37.717 test: (groupid=0, jobs=1): err= 0: pid=3230250: Wed Oct 16 07:07:36 2024 00:24:37.717 read: IOPS=9775, BW=153MiB/s (160MB/s)(306MiB/2003msec) 00:24:37.717 slat (usec): min=3, max=114, avg= 3.59, stdev= 1.56 00:24:37.717 clat (usec): min=1352, max=16957, avg=7989.08, stdev=1889.81 00:24:37.717 lat (usec): min=1355, max=16960, avg=7992.66, stdev=1889.91 00:24:37.717 clat percentiles (usec): 00:24:37.717 | 1.00th=[ 3916], 5.00th=[ 5014], 10.00th=[ 5538], 20.00th=[ 6259], 00:24:37.717 | 30.00th=[ 6849], 40.00th=[ 7373], 50.00th=[ 7963], 60.00th=[ 8455], 00:24:37.717 | 70.00th=[ 9110], 80.00th=[ 9765], 90.00th=[10552], 95.00th=[10945], 00:24:37.717 | 99.00th=[12256], 99.50th=[12649], 99.90th=[13304], 99.95th=[13829], 00:24:37.717 | 99.99th=[14222] 00:24:37.717 bw ( KiB/s): min=70272, max=88320, per=49.18%, avg=76920.00, stdev=8132.05, samples=4 00:24:37.717 iops : min= 4392, max= 5520, avg=4807.50, stdev=508.25, samples=4 00:24:37.717 write: IOPS=5695, BW=89.0MiB/s (93.3MB/s)(157MiB/1769msec); 0 zone resets 00:24:37.717 slat (usec): min=39, max=328, 
avg=40.76, stdev= 6.04 00:24:37.717 clat (usec): min=3155, max=13536, avg=9012.95, stdev=1273.46 00:24:37.717 lat (usec): min=3195, max=13576, avg=9053.71, stdev=1274.49 00:24:37.717 clat percentiles (usec): 00:24:37.717 | 1.00th=[ 6325], 5.00th=[ 7111], 10.00th=[ 7504], 20.00th=[ 7898], 00:24:37.717 | 30.00th=[ 8291], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[ 9372], 00:24:37.717 | 70.00th=[ 9634], 80.00th=[10028], 90.00th=[10683], 95.00th=[11207], 00:24:37.717 | 99.00th=[12387], 99.50th=[12780], 99.90th=[13304], 99.95th=[13435], 00:24:37.717 | 99.99th=[13435] 00:24:37.717 bw ( KiB/s): min=74112, max=91648, per=87.77%, avg=79992.00, stdev=8159.62, samples=4 00:24:37.717 iops : min= 4632, max= 5728, avg=4999.50, stdev=509.98, samples=4 00:24:37.717 lat (msec) : 2=0.06%, 4=0.70%, 10=80.77%, 20=18.48% 00:24:37.717 cpu : usr=87.36%, sys=11.64%, ctx=14, majf=0, minf=26 00:24:37.717 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:24:37.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:37.717 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:37.717 issued rwts: total=19581,10076,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:37.717 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:37.717 00:24:37.717 Run status group 0 (all jobs): 00:24:37.717 READ: bw=153MiB/s (160MB/s), 153MiB/s-153MiB/s (160MB/s-160MB/s), io=306MiB (321MB), run=2003-2003msec 00:24:37.717 WRITE: bw=89.0MiB/s (93.3MB/s), 89.0MiB/s-89.0MiB/s (93.3MB/s-93.3MB/s), io=157MiB (165MB), run=1769-1769msec 00:24:37.717 07:07:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:37.717 07:07:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:37.717 07:07:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:37.717 07:07:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:37.717 07:07:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:37.717 07:07:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:37.717 07:07:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:24:37.717 07:07:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:37.717 07:07:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:24:37.717 07:07:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:37.717 07:07:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:37.717 rmmod nvme_tcp 00:24:37.717 rmmod nvme_fabrics 00:24:37.717 rmmod nvme_keyring 00:24:37.717 07:07:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:37.717 07:07:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:24:37.717 07:07:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:24:37.717 07:07:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@515 -- # '[' -n 3228889 ']' 00:24:37.717 07:07:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # killprocess 3228889 00:24:37.717 07:07:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 3228889 ']' 00:24:37.717 07:07:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 
3228889 00:24:37.717 07:07:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:24:37.717 07:07:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:37.717 07:07:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3228889 00:24:37.977 07:07:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:37.977 07:07:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:37.977 07:07:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3228889' 00:24:37.977 killing process with pid 3228889 00:24:37.977 07:07:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 3228889 00:24:37.977 07:07:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 3228889 00:24:37.977 07:07:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:37.977 07:07:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:37.977 07:07:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:37.977 07:07:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:24:37.977 07:07:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-save 00:24:37.977 07:07:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:37.977 07:07:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-restore 00:24:37.977 07:07:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:37.977 07:07:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:37.977 07:07:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:37.977 07:07:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:37.977 07:07:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:40.523 00:24:40.523 real 0m17.744s 00:24:40.523 user 1m2.399s 00:24:40.523 sys 0m7.700s 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.523 ************************************ 00:24:40.523 END TEST nvmf_fio_host 00:24:40.523 ************************************ 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.523 ************************************ 00:24:40.523 START TEST nvmf_failover 00:24:40.523 ************************************ 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:40.523 * Looking for test storage... 00:24:40.523 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:40.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.523 --rc genhtml_branch_coverage=1 00:24:40.523 --rc genhtml_function_coverage=1 00:24:40.523 --rc genhtml_legend=1 00:24:40.523 --rc geninfo_all_blocks=1 00:24:40.523 --rc geninfo_unexecuted_blocks=1 00:24:40.523 00:24:40.523 ' 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:40.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.523 --rc genhtml_branch_coverage=1 00:24:40.523 --rc genhtml_function_coverage=1 00:24:40.523 --rc genhtml_legend=1 00:24:40.523 --rc geninfo_all_blocks=1 00:24:40.523 --rc geninfo_unexecuted_blocks=1 00:24:40.523 00:24:40.523 ' 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:40.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.523 --rc genhtml_branch_coverage=1 00:24:40.523 --rc genhtml_function_coverage=1 00:24:40.523 --rc genhtml_legend=1 00:24:40.523 --rc geninfo_all_blocks=1 00:24:40.523 --rc geninfo_unexecuted_blocks=1 00:24:40.523 00:24:40.523 ' 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:40.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.523 --rc genhtml_branch_coverage=1 00:24:40.523 --rc genhtml_function_coverage=1 00:24:40.523 --rc genhtml_legend=1 00:24:40.523 --rc geninfo_all_blocks=1 00:24:40.523 --rc geninfo_unexecuted_blocks=1 00:24:40.523 00:24:40.523 ' 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:40.523 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:40.524 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.524 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.524 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.524 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:40.524 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.524 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:24:40.524 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:40.524 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:40.524 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:40.524 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:40.524 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:40.524 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:40.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:40.524 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:40.524 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:40.524 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:40.524 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:40.524 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:40.524 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
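The "line 33: [: : integer expression expected" message that recurs each time nvmf/common.sh is sourced (here and in the fio_host run above) comes from the '[' '' -eq 1 ']' test visible in the trace: -eq requires an integer on both sides, so an empty or unset variable makes the test itself error out, and the script only survives because the branch falls through. A hedged sketch of the failing pattern and one defensive rewrite, assuming the operand is an optional setting that may be unset (the variable name below is hypothetical, chosen only for illustration):

# Failing pattern, as captured in the trace: an empty string fed to an arithmetic test.
SPDK_TEST_SOMETHING=''                                 # hypothetical optional setting
[ "$SPDK_TEST_SOMETHING" -eq 1 ] && echo enabled       # -> "[: : integer expression expected"

# Defensive rewrites: default the value, or gate on non-emptiness first.
[ "${SPDK_TEST_SOMETHING:-0}" -eq 1 ] && echo enabled
[ -n "$SPDK_TEST_SOMETHING" ] && [ "$SPDK_TEST_SOMETHING" -eq 1 ] && echo enabled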
00:24:40.524 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:40.524 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:40.524 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:40.524 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:40.524 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:40.524 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:40.524 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:40.524 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:40.524 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:40.524 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:40.524 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:40.524 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:40.524 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:24:40.524 07:07:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:48.667 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:48.667 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci 
in "${pci_devs[@]}" 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:48.667 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:48.667 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # is_hw=yes 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:48.667 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:48.668 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:48.668 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:48.668 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:24:48.668 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:24:48.668 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:24:48.668 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:24:48.668 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:24:48.668 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:24:48.668 07:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:24:48.668 07:07:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:24:48.668 07:07:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:24:48.668 07:07:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:24:48.668 07:07:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:24:48.668 07:07:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:48.668 07:07:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:48.668 07:07:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:24:48.668 07:07:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:24:48.668 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:48.668 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.689 ms
00:24:48.668
00:24:48.668 --- 10.0.0.2 ping statistics ---
00:24:48.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:48.668 rtt min/avg/max/mdev = 0.689/0.689/0.689/0.000 ms
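Condensed, the nvmf_tcp_init sequence above turns the two E810 ports into a self-contained initiator/target pair on one box: one port stays in the root namespace as the initiator (10.0.0.1), its peer moves into a private network namespace as the target (10.0.0.2), and the firewall is opened for the NVMe/TCP port. A minimal sketch of the same steps, assuming the interface and namespace names from this run:

    ip netns add cvl_0_0_ns_spdk                        # target side lives in its own netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root ns -> target ns sanity check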
00:24:48.668 07:07:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:48.668 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:48.668 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms
00:24:48.668
00:24:48.668 --- 10.0.0.1 ping statistics ---
00:24:48.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:48.668 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms
00:24:48.668 07:07:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:48.668 07:07:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # return 0
00:24:48.668 07:07:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:24:48.668 07:07:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:48.668 07:07:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:24:48.668 07:07:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:24:48.668 07:07:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:48.668 07:07:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:24:48.668 07:07:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:24:48.668 07:07:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE
00:24:48.668 07:07:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:24:48.668 07:07:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable
00:24:48.668 07:07:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:24:48.668 07:07:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # nvmfpid=3234913
00:24:48.668 07:07:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # waitforlisten 3234913
00:24:48.668 07:07:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:24:48.668 07:07:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 3234913 ']'
00:24:48.668 07:07:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:48.668 07:07:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
00:24:48.668 07:07:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:48.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:48.668 07:07:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable
00:24:48.668 07:07:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
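waitforlisten simply polls until the freshly forked nvmf_tgt answers on its RPC socket (up to max_retries=100, as traced above). Outside the harness, the same readiness gate can be a loop against any cheap RPC; rpc_get_methods is used here purely as a probe, and the 100 x 0.1 s budget is an assumption mirroring the traced retry count:

    # Block until the target's RPC socket accepts requests, roughly 10 s max.
    for _ in $(seq 100); do
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done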
00:24:48.668 [2024-10-16 07:07:47.278379] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization...
00:24:48.668 [2024-10-16 07:07:47.278448] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:48.668 [2024-10-16 07:07:47.368160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:24:48.668 [2024-10-16 07:07:47.419863] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:48.668 [2024-10-16 07:07:47.419916] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:48.668 [2024-10-16 07:07:47.419925] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:48.668 [2024-10-16 07:07:47.419932] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:48.668 [2024-10-16 07:07:47.419939] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:24:48.668 [2024-10-16 07:07:47.422061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:24:48.668 [2024-10-16 07:07:47.422223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:24:48.668 [2024-10-16 07:07:47.422224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:24:48.668 07:07:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:24:48.668 07:07:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0
00:24:48.668 07:07:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:24:48.668 07:07:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable
00:24:48.668 07:07:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:24:48.668 07:07:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:48.668 07:07:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:24:48.929 [2024-10-16 07:07:48.298270] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:48.929 07:07:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:24:49.190 Malloc0
00:24:49.190 07:07:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:24:49.450 07:07:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:24:49.450 07:07:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:49.712 [2024-10-16 07:07:49.099814] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:49.712 07:07:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:24:49.973 [2024-10-16 07:07:49.296303] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:24:49.973 07:07:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:24:49.973 [2024-10-16 07:07:49.468807] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
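From here the whole NVMe-oF configuration is RPC-driven over /var/tmp/spdk.sock: create the TCP transport, carve out a malloc bdev, wrap it in a subsystem, and expose that subsystem on the three ports the failover test will juggle. The same bring-up as a bare script (rpc.py stands for the full scripts/rpc.py path in the trace; the loop is a condensation of the @26 through @28 calls above):

    rpc.py nvmf_create_transport -t tcp -o -u 8192      # -u 8192: in-capsule data size
    rpc.py bdev_malloc_create 64 512 -b Malloc0         # 64 MiB bdev, 512-byte blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do                      # three listeners, same address
        rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s "$port"
    done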
00:24:50.233 07:07:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3235417
00:24:50.233 07:07:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:24:50.233 07:07:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:24:50.233 07:07:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3235417 /var/tmp/bdevperf.sock
00:24:50.233 07:07:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 3235417 ']'
00:24:50.233 07:07:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:50.233 07:07:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
00:24:50.233 07:07:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:24:50.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:24:50.234 07:07:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable
00:24:50.234 07:07:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:24:51.175 07:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:24:51.175 07:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0
00:24:51.175 07:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:24:51.435 NVMe0n1
00:24:51.435 07:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:24:51.696
00:24:51.696 07:07:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3235690
00:24:51.696 07:07:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:24:51.696 07:07:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:24:52.637 07:07:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:52.898 [2024-10-16 07:07:52.269519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ffe010 is same with the state(6) to be set
[... the tcp.c:1773 recv-state error above repeats 15 more times for tqpair=0x1ffe010 (07:07:52.269562 through 07:07:52.269628) ...]
00:24:52.898 07:07:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:24:56.196 07:07:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:24:56.196
00:24:56.196 07:07:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:24:56.456 [2024-10-16 07:07:55.844189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ffedc0 is same with the state(6) to be set
[... repeats 13 more times for tqpair=0x1ffedc0 (07:07:55.844231 through 07:07:55.844288) ...]
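NVMe0 was created with -x failover and then handed two more listeners for the same subsystem, so the bdev keeps one active connection and can reconnect through the other target ports when its current one disappears. The path-building calls, collapsed (RPC is just a shorthand variable here; the flags are exactly the traced ones):

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover    # primary path
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover    # standby path
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover    # standby path added mid-test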
00:24:56.456 07:07:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:24:59.816 07:07:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:59.816 [2024-10-16 07:07:59.032377] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:59.816 07:07:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:25:00.819 07:08:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:25:00.819 [2024-10-16 07:08:00.221234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fffd10 is same with the state(6) to be set
[... repeats 42 more times for tqpair=0x1fffd10 (07:08:00.221271 through 07:08:00.221469) ...]
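All of the fault injection happens on the target side; bdevperf is never told anything. failover.sh just deletes whichever listener the host is connected to and lets the initiator's reconnect logic find a surviving port, pausing long enough for I/O to resume. The choreography reduced to its RPC calls (comments describe the expected effect, inferred from this trace):

    SUB=nqn.2016-06.io.spdk:cnode1
    rpc.py nvmf_subsystem_remove_listener $SUB -t tcp -a 10.0.0.2 -s 4420   # drop active path
    sleep 3                                                                 # I/O fails over to 4421
    rpc.py nvmf_subsystem_remove_listener $SUB -t tcp -a 10.0.0.2 -s 4421   # drop that one too
    sleep 3                                                                 # I/O fails over to 4422
    rpc.py nvmf_subsystem_add_listener $SUB -t tcp -a 10.0.0.2 -s 4420      # restore original port
    sleep 1
    rpc.py nvmf_subsystem_remove_listener $SUB -t tcp -a 10.0.0.2 -s 4422   # force failback to 4420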
00:25:00.820 07:08:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3235690
00:25:07.414 {
00:25:07.415   "results": [
00:25:07.415     {
00:25:07.415       "job": "NVMe0n1",
00:25:07.415       "core_mask": "0x1",
00:25:07.415       "workload": "verify",
00:25:07.415       "status": "finished",
00:25:07.415       "verify_range": {
00:25:07.415         "start": 0,
00:25:07.415         "length": 16384
00:25:07.415       },
00:25:07.415       "queue_depth": 128,
00:25:07.415       "io_size": 4096,
00:25:07.415       "runtime": 15.002206,
00:25:07.415       "iops": 12442.836740143417,
00:25:07.415       "mibps": 48.60483101618522,
00:25:07.415       "io_failed": 11469,
00:25:07.415       "io_timeout": 0,
00:25:07.415       "avg_latency_us": 9671.277009641379,
00:25:07.415       "min_latency_us": 535.8933333333333,
00:25:07.415       "max_latency_us": 13817.173333333334
00:25:07.415     }
00:25:07.415   ],
00:25:07.415   "core_count": 1
00:25:07.415 }
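The JSON blob is bdevperf's per-job summary for the 15 s run: roughly 12.4k IOPS sustained on a single core, with 11469 I/Os presumably erroring out across the three listener removals. Assuming the blob were saved off as results.json, a jq one-liner (illustrative tooling, not part of the harness) pulls the headline numbers:

    jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.io_failed) failed I/Os, \(.runtime)s runtime"' results.json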
00:25:07.415 07:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3235417
00:25:07.415 07:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 3235417 ']'
00:25:07.415 07:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 3235417
00:25:07.415 07:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:25:07.415 07:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:25:07.415 07:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3235417
00:25:07.415 07:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:25:07.415 07:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:25:07.415 07:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3235417'
00:25:07.415 killing process with pid 3235417
00:25:07.415 07:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 3235417
00:25:07.415 07:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 3235417
00:25:07.415 07:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:07.415 [2024-10-16 07:07:49.548557] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization...
00:25:07.415 [2024-10-16 07:07:49.548621] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3235417 ]
00:25:07.415 [2024-10-16 07:07:49.628089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:07.415 [2024-10-16 07:07:49.663884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:07.415 Running I/O for 15 seconds...
00:25:07.415 11021.00 IOPS, 43.05 MiB/s [2024-10-16T05:08:06.914Z]
00:25:07.415 [2024-10-16 07:07:52.271485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:07.415 [2024-10-16 07:07:52.271521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:07.415 [2024-10-16 07:07:52.271532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:07.415 [2024-10-16 07:07:52.271540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:07.415 [2024-10-16 07:07:52.271549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:07.415 [2024-10-16 07:07:52.271557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:07.415 [2024-10-16 07:07:52.271565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:07.415 [2024-10-16 07:07:52.271573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:07.415 [2024-10-16 07:07:52.271581] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd53e30 is same with the state(6) to be set
00:25:07.415 [2024-10-16 07:07:52.271637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:94784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:07.415 [2024-10-16 07:07:52.271647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the nvme_io_qpair_print_command / spdk_nvme_print_completion pair above repeats for the rest of the in-flight queue-1 I/O, WRITEs for lba 95016 through 95592 and READs for lba 94792 through 94840, every one completing ABORTED - SQ DELETION ...]
00:25:07.417 [2024-10-16 07:07:52.273021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:95600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:07.417 [2024-10-16 07:07:52.273028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED -
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.417 [2024-10-16 07:07:52.273037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:95608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.417 [2024-10-16 07:07:52.273046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.417 [2024-10-16 07:07:52.273056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:95616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.417 [2024-10-16 07:07:52.273063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.417 [2024-10-16 07:07:52.273072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:95624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.417 [2024-10-16 07:07:52.273079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.417 [2024-10-16 07:07:52.273089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:95632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.417 [2024-10-16 07:07:52.273096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.417 [2024-10-16 07:07:52.273105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:95640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.417 [2024-10-16 07:07:52.273112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.417 [2024-10-16 07:07:52.273121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:95648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.417 [2024-10-16 07:07:52.273128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.417 [2024-10-16 07:07:52.273138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:95656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.417 [2024-10-16 07:07:52.273145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.417 [2024-10-16 07:07:52.273154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:95664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.417 [2024-10-16 07:07:52.273161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.417 [2024-10-16 07:07:52.273170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:95672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.417 [2024-10-16 07:07:52.273178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.417 [2024-10-16 07:07:52.273187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:95680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.417 [2024-10-16 07:07:52.273194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:07.417 [2024-10-16 07:07:52.273203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.417 [2024-10-16 07:07:52.273210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.417 [2024-10-16 07:07:52.273219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:95696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.417 [2024-10-16 07:07:52.273227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.417 [2024-10-16 07:07:52.273236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:95704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.417 [2024-10-16 07:07:52.273243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.417 [2024-10-16 07:07:52.273253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:95712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.417 [2024-10-16 07:07:52.273261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.417 [2024-10-16 07:07:52.273271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:95720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.417 [2024-10-16 07:07:52.273278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.417 [2024-10-16 07:07:52.273288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:95728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.417 [2024-10-16 07:07:52.273295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.417 [2024-10-16 07:07:52.273304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:95736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.417 [2024-10-16 07:07:52.273312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.417 [2024-10-16 07:07:52.273321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:94848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.417 [2024-10-16 07:07:52.273328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.417 [2024-10-16 07:07:52.273338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:94856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.417 [2024-10-16 07:07:52.273345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.417 [2024-10-16 07:07:52.273355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:94864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.417 [2024-10-16 07:07:52.273362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.417 [2024-10-16 
07:07:52.273372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:94872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.417 [2024-10-16 07:07:52.273379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.417 [2024-10-16 07:07:52.273389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:94880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.417 [2024-10-16 07:07:52.273396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.417 [2024-10-16 07:07:52.273405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:94888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.417 [2024-10-16 07:07:52.273412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.417 [2024-10-16 07:07:52.273422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:94896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.417 [2024-10-16 07:07:52.273429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.417 [2024-10-16 07:07:52.273439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:94904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.417 [2024-10-16 07:07:52.273446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.417 [2024-10-16 07:07:52.273455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:94912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.417 [2024-10-16 07:07:52.273464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.417 [2024-10-16 07:07:52.273473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:94920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.417 [2024-10-16 07:07:52.273481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.417 [2024-10-16 07:07:52.273490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:94928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.417 [2024-10-16 07:07:52.273497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.417 [2024-10-16 07:07:52.273506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:94936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.417 [2024-10-16 07:07:52.273513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.418 [2024-10-16 07:07:52.273523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:94944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.418 [2024-10-16 07:07:52.273530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.418 [2024-10-16 07:07:52.273539] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:94952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.418 [2024-10-16 07:07:52.273547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.418 [2024-10-16 07:07:52.273556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:94960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.418 [2024-10-16 07:07:52.273563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.418 [2024-10-16 07:07:52.273572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:94968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.418 [2024-10-16 07:07:52.273579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.418 [2024-10-16 07:07:52.273589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:95744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.418 [2024-10-16 07:07:52.273596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.418 [2024-10-16 07:07:52.273605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:95752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.418 [2024-10-16 07:07:52.273612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.418 [2024-10-16 07:07:52.273621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:95760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.418 [2024-10-16 07:07:52.273629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.418 [2024-10-16 07:07:52.273638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:95768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.418 [2024-10-16 07:07:52.273645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.418 [2024-10-16 07:07:52.273655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:95776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.418 [2024-10-16 07:07:52.273662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.418 [2024-10-16 07:07:52.273672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:95784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.418 [2024-10-16 07:07:52.273680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.418 [2024-10-16 07:07:52.273689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:95792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.418 [2024-10-16 07:07:52.273696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.418 [2024-10-16 07:07:52.273706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:61 nsid:1 lba:95800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.418 [2024-10-16 07:07:52.273713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.418 [2024-10-16 07:07:52.273722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:94976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.418 [2024-10-16 07:07:52.273729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.418 [2024-10-16 07:07:52.273738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:94984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.418 [2024-10-16 07:07:52.273745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.418 [2024-10-16 07:07:52.273755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:94992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.418 [2024-10-16 07:07:52.273762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.418 [2024-10-16 07:07:52.273771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:95000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.418 [2024-10-16 07:07:52.273778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.418 [2024-10-16 07:07:52.273797] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.418 [2024-10-16 07:07:52.273804] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.418 [2024-10-16 07:07:52.273811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95008 len:8 PRP1 0x0 PRP2 0x0 00:25:07.418 [2024-10-16 07:07:52.273818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.418 [2024-10-16 07:07:52.273861] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd74f30 was disconnected and freed. reset controller. 00:25:07.418 [2024-10-16 07:07:52.273871] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:07.418 [2024-10-16 07:07:52.273879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:07.418 [2024-10-16 07:07:52.277425] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:07.418 [2024-10-16 07:07:52.277448] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd53e30 (9): Bad file descriptor 00:25:07.418 [2024-10-16 07:07:52.313672] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:07.418 11111.50 IOPS, 43.40 MiB/s [2024-10-16T05:08:06.917Z] 11306.00 IOPS, 44.16 MiB/s [2024-10-16T05:08:06.917Z] 11679.50 IOPS, 45.62 MiB/s [2024-10-16T05:08:06.917Z]
[~125 near-identical record pairs elided: second abort burst at 07:07:55.844-845 after failover, nvme_io_qpair_print_command/spdk_nvme_print_completion *NOTICE* pairs for WRITE lba:56376-56640 (SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ lba:55624-56360 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), len:8 each, every one completed ABORTED - SQ DELETION (00/08) qid:1]
00:25:07.421 [2024-10-16 07:07:55.846018] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76fe0 is same with the state(6) to be set
00:25:07.421 [2024-10-16 07:07:55.846026] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:07.421 [2024-10-16 07:07:55.846030] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:07.421 [2024-10-16 07:07:55.846035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:56368 len:8 PRP1 0x0 PRP2 0x0
00:25:07.421 [2024-10-16 07:07:55.846040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:07.421 [2024-10-16 07:07:55.846070] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd76fe0 was disconnected and freed. reset controller.
00:25:07.421 [2024-10-16 07:07:55.846078] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:25:07.421 [2024-10-16 07:07:55.846094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.421 [2024-10-16 07:07:55.846100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.421 [2024-10-16 07:07:55.846106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.421 [2024-10-16 07:07:55.846111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.421 [2024-10-16 07:07:55.846116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.421 [2024-10-16 07:07:55.846125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.421 [2024-10-16 07:07:55.846131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.421 [2024-10-16 07:07:55.846136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.421 [2024-10-16 07:07:55.846141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:07.421 [2024-10-16 07:07:55.846167] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd53e30 (9): Bad file descriptor 00:25:07.421 [2024-10-16 07:07:55.848621] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:07.421 [2024-10-16 07:07:56.004715] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
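The block above is the signature of one forced failover: every command queued on the deleted submission queue completes as ABORTED - SQ DELETION, the qpair is freed, bdev_nvme fails over to the next registered path, and the controller reset completes. To follow just those high-level transitions in a log like this one, a minimal grep sketch (the path matches the try.txt file cat'ed at the end of this run; adjust it for your workspace):

  grep -E 'bdev_nvme_failover_trid|disconnected and freed|nvme_ctrlr_fail|resetting controller|Resetting controller successful' \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt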
00:25:07.421 11540.00 IOPS, 45.08 MiB/s [2024-10-16T05:08:06.921Z] 11798.00 IOPS, 46.09 MiB/s [2024-10-16T05:08:06.921Z] 11980.14 IOPS, 46.80 MiB/s [2024-10-16T05:08:06.921Z] 12094.75 IOPS, 47.25 MiB/s [2024-10-16T05:08:06.921Z] 12202.44 IOPS, 47.67 MiB/s
00:25:07.422 [... repeated nvme_qpair.c 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion notices condensed: queued READ (lba 39488-40024) and WRITE (lba 40032-40496) commands on sqid:1, each completed as ABORTED - SQ DELETION (00/08) ...]
00:25:07.425 [2024-10-16 07:08:00.223510] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:07.425 [2024-10-16 07:08:00.223515] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:07.425 [2024-10-16 07:08:00.223520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40504 len:8 PRP1 0x0 PRP2 0x0
00:25:07.425 [2024-10-16 07:08:00.223525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:07.425 [2024-10-16 07:08:00.223558] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd812c0 was disconnected and freed. reset controller.
00:25:07.425 [2024-10-16 07:08:00.223565] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:25:07.425 [... 4 repeated ASYNC EVENT REQUEST (0c) admin commands (qid:0 cid:0-3) condensed, each completed as ABORTED - SQ DELETION (00/08) ...]
00:25:07.425 [2024-10-16 07:08:00.223632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:07.425 [2024-10-16 07:08:00.226085] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:07.425 [2024-10-16 07:08:00.226104] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd53e30 (9): Bad file descriptor
00:25:07.425 [2024-10-16 07:08:00.258301] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:25:07.425 12227.80 IOPS, 47.76 MiB/s [2024-10-16T05:08:06.924Z] 12285.00 IOPS, 47.99 MiB/s [2024-10-16T05:08:06.924Z] 12330.67 IOPS, 48.17 MiB/s [2024-10-16T05:08:06.924Z] 12376.08 IOPS, 48.34 MiB/s [2024-10-16T05:08:06.924Z] 12408.57 IOPS, 48.47 MiB/s
00:25:07.425                                                    Latency(us)
00:25:07.425 [2024-10-16T05:08:06.924Z] Device Information : runtime(s)     IOPS     MiB/s   Fail/s   TO/s   Average     min     max
00:25:07.425 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:07.425 Verification LBA range: start 0x0 length 0x4000
00:25:07.425 NVMe0n1            :      15.00  12442.84    48.60   764.49   0.00   9671.28  535.89  13817.17
00:25:07.425 [2024-10-16T05:08:06.924Z] ===================================================================================================================
00:25:07.425 [2024-10-16T05:08:06.924Z] Total              :   12442.84    48.60   764.49   0.00   9671.28  535.89  13817.17
00:25:07.425 Received shutdown signal, test time was about 15.000000 seconds
00:25:07.425
00:25:07.425                                                    Latency(us)
00:25:07.425 [2024-10-16T05:08:06.924Z] Device Information : runtime(s)     IOPS     MiB/s   Fail/s   TO/s   Average     min     max
00:25:07.425 [2024-10-16T05:08:06.924Z] ===================================================================================================================
00:25:07.425 [2024-10-16T05:08:06.924Z] Total              :       0.00     0.00     0.00    0.00      0.00    0.00      0.00
00:25:07.425 07:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:25:07.425 07:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:25:07.425 07:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:25:07.425 07:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3238632
00:25:07.425 07:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3238632 /var/tmp/bdevperf.sock
00:25:07.425 07:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:25:07.425 07:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 3238632 ']'
00:25:07.425 07:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:07.425 07:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
00:25:07.425 07:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
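The @65-@67 trace above is the pass/fail gate for this phase: the harness counts 'Resetting controller successful' lines in the bdevperf log and requires exactly one per planned path switch (4420 to 4421, 4421 to 4422, 4422 to 4420, hence 3). A standalone sketch of the same check, assuming the log was saved as try.txt in the current directory:

  count=$(grep -c 'Resetting controller successful' try.txt)
  if (( count != 3 )); then
    echo "expected 3 successful controller resets, got $count" >&2
    exit 1
  fi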
00:25:07.426 07:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:07.426 07:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:07.426 07:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:07.426 07:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:25:07.426 07:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:07.426 [2024-10-16 07:08:06.825951] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:07.426 07:08:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:07.688 [2024-10-16 07:08:07.010412] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:07.688 07:08:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:07.948 NVMe0n1 00:25:07.948 07:08:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:08.208 00:25:08.469 07:08:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:08.469 00:25:08.469 07:08:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:08.469 07:08:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:25:08.731 07:08:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:08.992 07:08:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:25:12.295 07:08:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:12.295 07:08:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:25:12.295 07:08:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:12.295 07:08:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3239642 00:25:12.295 07:08:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3239642 00:25:13.236 { 00:25:13.236 "results": [ 00:25:13.236 { 00:25:13.236 "job": "NVMe0n1", 00:25:13.236 "core_mask": "0x1", 
00:25:13.236 "workload": "verify", 00:25:13.236 "status": "finished", 00:25:13.236 "verify_range": { 00:25:13.236 "start": 0, 00:25:13.236 "length": 16384 00:25:13.236 }, 00:25:13.236 "queue_depth": 128, 00:25:13.236 "io_size": 4096, 00:25:13.236 "runtime": 1.005514, 00:25:13.236 "iops": 12950.590444290185, 00:25:13.236 "mibps": 50.588243923008534, 00:25:13.236 "io_failed": 0, 00:25:13.236 "io_timeout": 0, 00:25:13.236 "avg_latency_us": 9847.760241642349, 00:25:13.236 "min_latency_us": 2225.4933333333333, 00:25:13.236 "max_latency_us": 13926.4 00:25:13.236 } 00:25:13.236 ], 00:25:13.236 "core_count": 1 00:25:13.236 } 00:25:13.236 07:08:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:13.236 [2024-10-16 07:08:06.486644] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:25:13.236 [2024-10-16 07:08:06.486702] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3238632 ] 00:25:13.236 [2024-10-16 07:08:06.562449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:13.236 [2024-10-16 07:08:06.590639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:13.236 [2024-10-16 07:08:08.284337] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:13.236 [2024-10-16 07:08:08.284373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:13.236 [2024-10-16 07:08:08.284381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.236 [2024-10-16 07:08:08.284388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:13.236 [2024-10-16 07:08:08.284394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.236 [2024-10-16 07:08:08.284400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:13.236 [2024-10-16 07:08:08.284405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.236 [2024-10-16 07:08:08.284410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:13.236 [2024-10-16 07:08:08.284415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.236 [2024-10-16 07:08:08.284425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:13.236 [2024-10-16 07:08:08.284445] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:13.236 [2024-10-16 07:08:08.284457] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1144e30 (9): Bad file descriptor 00:25:13.236 [2024-10-16 07:08:08.305220] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:13.236 Running I/O for 1 seconds... 00:25:13.236 12894.00 IOPS, 50.37 MiB/s 00:25:13.236 Latency(us) 00:25:13.236 [2024-10-16T05:08:12.735Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:13.236 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:13.236 Verification LBA range: start 0x0 length 0x4000 00:25:13.236 NVMe0n1 : 1.01 12950.59 50.59 0.00 0.00 9847.76 2225.49 13926.40 00:25:13.236 [2024-10-16T05:08:12.735Z] =================================================================================================================== 00:25:13.236 [2024-10-16T05:08:12.735Z] Total : 12950.59 50.59 0.00 0.00 9847.76 2225.49 13926.40 00:25:13.236 07:08:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:13.236 07:08:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:25:13.497 07:08:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:13.497 07:08:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:13.497 07:08:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:25:13.758 07:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:14.018 07:08:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:25:17.323 07:08:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:17.323 07:08:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:25:17.323 07:08:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3238632 00:25:17.323 07:08:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 3238632 ']' 00:25:17.323 07:08:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 3238632 00:25:17.323 07:08:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:25:17.323 07:08:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:17.323 07:08:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3238632 00:25:17.323 07:08:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:17.323 07:08:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:17.323 07:08:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3238632' 00:25:17.323 killing process with pid 3238632 00:25:17.323 07:08:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 3238632 00:25:17.323 07:08:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 3238632 00:25:17.323 07:08:16 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:25:17.323 07:08:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:17.584 07:08:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:17.584 07:08:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:17.584 07:08:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:17.584 07:08:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:17.584 07:08:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:25:17.584 07:08:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:17.584 07:08:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:25:17.584 07:08:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:17.584 07:08:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:17.584 rmmod nvme_tcp 00:25:17.584 rmmod nvme_fabrics 00:25:17.584 rmmod nvme_keyring 00:25:17.584 07:08:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:17.584 07:08:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:25:17.584 07:08:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:25:17.584 07:08:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@515 -- # '[' -n 3234913 ']' 00:25:17.584 07:08:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # killprocess 3234913 00:25:17.584 07:08:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 3234913 ']' 00:25:17.584 07:08:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 3234913 00:25:17.584 07:08:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:25:17.584 07:08:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:17.584 07:08:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3234913 00:25:17.584 07:08:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:17.584 07:08:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:17.584 07:08:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3234913' 00:25:17.584 killing process with pid 3234913 00:25:17.584 07:08:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 3234913 00:25:17.584 07:08:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 3234913 00:25:17.845 07:08:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:17.845 07:08:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:17.845 07:08:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:17.845 07:08:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:25:17.845 07:08:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-save 00:25:17.845 07:08:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # grep -v 
SPDK_NVMF 00:25:17.845 07:08:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-restore 00:25:17.845 07:08:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:17.845 07:08:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:17.845 07:08:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:17.845 07:08:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:17.845 07:08:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:19.760 07:08:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:19.760 00:25:19.760 real 0m39.686s 00:25:19.760 user 2m1.461s 00:25:19.760 sys 0m8.615s 00:25:19.760 07:08:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:19.760 07:08:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:19.760 ************************************ 00:25:19.760 END TEST nvmf_failover 00:25:19.760 ************************************ 00:25:19.760 07:08:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:19.760 07:08:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:19.760 07:08:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:19.760 07:08:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.022 ************************************ 00:25:20.022 START TEST nvmf_host_discovery 00:25:20.022 ************************************ 00:25:20.022 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:20.022 * Looking for test storage... 
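
[Note: two teardown details from the failover run above are worth flagging before the discovery output resumes. killprocess probes the PID with kill -0 and inspects the process name before killing, so it never signals a stale or wrong PID; and iptr restores iptables by replaying the saved ruleset minus the SPDK-tagged rules, leaving unrelated firewall state untouched. Condensed from the trace:

  kill -0 "$pid" && kill "$pid" && wait "$pid"           # killprocess core
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: drop only SPDK rules

]
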
00:25:20.022 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:20.022 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:20.022 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:25:20.022 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:20.022 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:20.022 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:20.022 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:20.022 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:20.022 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:25:20.022 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:25:20.022 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:25:20.022 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:25:20.022 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:25:20.022 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:25:20.022 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:25:20.022 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:20.022 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:25:20.022 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:25:20.022 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:20.022 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:20.022 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:25:20.022 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:25:20.022 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:20.022 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:25:20.022 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:25:20.023 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:25:20.023 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:25:20.023 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:20.023 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:25:20.023 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:25:20.023 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:20.023 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:20.023 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:25:20.023 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:20.023 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:20.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.023 --rc genhtml_branch_coverage=1 00:25:20.023 --rc genhtml_function_coverage=1 00:25:20.023 --rc genhtml_legend=1 00:25:20.023 --rc geninfo_all_blocks=1 00:25:20.023 --rc geninfo_unexecuted_blocks=1 00:25:20.023 00:25:20.023 ' 00:25:20.023 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:20.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.023 --rc genhtml_branch_coverage=1 00:25:20.023 --rc genhtml_function_coverage=1 00:25:20.023 --rc genhtml_legend=1 00:25:20.023 --rc geninfo_all_blocks=1 00:25:20.023 --rc geninfo_unexecuted_blocks=1 00:25:20.023 00:25:20.023 ' 00:25:20.023 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:20.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.023 --rc genhtml_branch_coverage=1 00:25:20.023 --rc genhtml_function_coverage=1 00:25:20.023 --rc genhtml_legend=1 00:25:20.023 --rc geninfo_all_blocks=1 00:25:20.023 --rc geninfo_unexecuted_blocks=1 00:25:20.023 00:25:20.023 ' 00:25:20.023 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:20.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.023 --rc genhtml_branch_coverage=1 00:25:20.023 --rc genhtml_function_coverage=1 00:25:20.023 --rc genhtml_legend=1 00:25:20.023 --rc geninfo_all_blocks=1 00:25:20.023 --rc geninfo_unexecuted_blocks=1 00:25:20.023 00:25:20.023 ' 00:25:20.023 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:20.023 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:20.023 07:08:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:20.023 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:20.023 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:20.023 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:20.023 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:20.023 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:20.023 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:20.023 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:20.023 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:20.023 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:20.023 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:20.023 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:20.023 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:20.023 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:20.023 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:20.023 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:20.023 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:20.023 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:25:20.285 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:20.285 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:20.285 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:20.285 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.285 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.285 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.285 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:20.285 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.285 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:25:20.285 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:20.285 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:20.285 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:20.285 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:20.285 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:20.285 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:20.285 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:20.285 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:20.285 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:20.285 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:20.285 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:20.285 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:20.285 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:20.285 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:20.285 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:20.285 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:20.285 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:20.285 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:20.285 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:20.285 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:20.285 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:20.285 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:20.285 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:20.285 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:20.285 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:20.285 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:20.285 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:20.285 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:25:20.285 07:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:28.433 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:28.433 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:25:28.433 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:28.433 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:28.433 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:28.433 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:28.433 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:28.434 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:28.434 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:28.434 07:08:26 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:28.434 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:28.434 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:28.434 
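
[Note: the scan above walks the PCI bus for supported NVMe-oF NICs: it matches the Intel E810 device ID 0x159b on 0000:4b:00.0 and 0000:4b:00.1 and records the kernel net devices found under each function (cvl_0_0 and cvl_0_1). A minimal standalone sketch of the same idea:

  for pci in /sys/bus/pci/devices/*; do
      [[ $(< "$pci/vendor") == 0x8086 && $(< "$pci/device") == 0x159b ]] || continue
      echo "Found net devices under ${pci##*/}: $(ls "$pci/net")"
  done

]
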
07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:28.434 07:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:28.434 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:28.434 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:25:28.434 00:25:28.434 --- 10.0.0.2 ping statistics --- 00:25:28.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:28.434 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:25:28.434 07:08:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:28.434 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:28.434 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:25:28.434 00:25:28.434 --- 10.0.0.1 ping statistics --- 00:25:28.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:28.434 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:25:28.434 07:08:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:28.434 07:08:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # return 0 00:25:28.434 07:08:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:28.434 07:08:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:28.434 07:08:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:28.434 07:08:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:28.434 07:08:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:28.434 07:08:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:28.434 07:08:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:28.434 07:08:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:28.434 07:08:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:28.434 07:08:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:28.434 07:08:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:28.434 07:08:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # nvmfpid=3244975 00:25:28.434 07:08:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # waitforlisten 3244975 00:25:28.435 07:08:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:28.435 07:08:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 3244975 ']' 00:25:28.435 07:08:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:28.435 07:08:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:28.435 07:08:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:28.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:28.435 07:08:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:28.435 07:08:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:28.435 [2024-10-16 07:08:27.129715] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
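
[Note: the two single-packet pings above (0.654 ms and 0.279 ms) confirm the split-stack topology nvmf_tcp_init just built: one NIC port is moved into a network namespace so the target and initiator run on separate IP stacks within one machine. Condensed from the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up

From here on, NVMF_APP is prefixed with "ip netns exec cvl_0_0_ns_spdk" so the target listens on 10.0.0.2 inside the namespace.]
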
00:25:28.435 [2024-10-16 07:08:27.129788] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:28.435 [2024-10-16 07:08:27.217557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:28.435 [2024-10-16 07:08:27.268104] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:28.435 [2024-10-16 07:08:27.268153] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:28.435 [2024-10-16 07:08:27.268162] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:28.435 [2024-10-16 07:08:27.268174] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:28.435 [2024-10-16 07:08:27.268181] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:28.435 [2024-10-16 07:08:27.268931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:28.696 07:08:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:28.696 07:08:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:25:28.696 07:08:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:28.696 07:08:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:28.696 07:08:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:28.696 07:08:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:28.696 07:08:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:28.696 07:08:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.696 07:08:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:28.696 [2024-10-16 07:08:27.987512] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:28.696 07:08:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.696 07:08:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:28.696 07:08:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.696 07:08:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:28.696 [2024-10-16 07:08:27.999786] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:28.696 07:08:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.696 07:08:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:28.696 07:08:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.696 07:08:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:28.696 null0 00:25:28.696 07:08:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.696 07:08:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:28.696 07:08:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.696 07:08:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:28.696 null1 00:25:28.696 07:08:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.696 07:08:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:28.696 07:08:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.696 07:08:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:28.696 07:08:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.696 07:08:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3245025 00:25:28.696 07:08:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:28.696 07:08:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3245025 /tmp/host.sock 00:25:28.696 07:08:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 3245025 ']' 00:25:28.696 07:08:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:25:28.696 07:08:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:28.696 07:08:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:28.696 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:28.696 07:08:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:28.696 07:08:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:28.696 [2024-10-16 07:08:28.098404] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
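
[Note: the discovery test runs two SPDK apps side by side: the nvmf target (inside the namespace, RPC on /var/tmp/spdk.sock) exports a discovery listener plus two null bdevs, while a second instance on /tmp/host.sock plays the NVMe-oF host and will shortly run bdev_nvme_start_discovery against port 8009. A condensed sketch of the target-side setup traced above (rpc_cmd wraps scripts/rpc.py):

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
      -t tcp -a 10.0.0.2 -s 8009
  rpc_cmd bdev_null_create null0 1000 512   # 1000 blocks x 512 B, future namespaces
  rpc_cmd bdev_null_create null1 1000 512

]
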
00:25:28.696 [2024-10-16 07:08:28.098470] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3245025 ] 00:25:28.696 [2024-10-16 07:08:28.169756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:28.958 [2024-10-16 07:08:28.224083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:29.530 07:08:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:29.530 07:08:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:25:29.530 07:08:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:29.530 07:08:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:29.530 07:08:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.530 07:08:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.530 07:08:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.530 07:08:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:29.530 07:08:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.530 07:08:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.530 07:08:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.530 07:08:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:29.530 07:08:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:29.530 07:08:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:29.530 07:08:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.530 07:08:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:29.530 07:08:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.530 07:08:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:29.530 07:08:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:29.530 07:08:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.530 07:08:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:29.531 07:08:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:29.531 07:08:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:29.531 07:08:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:29.531 07:08:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.531 07:08:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:25:29.531 07:08:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.531 07:08:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:29.531 07:08:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.793 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:29.793 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:29.793 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.793 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.793 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.793 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:29.793 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:29.793 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:29.793 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.793 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:29.793 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.793 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:29.793 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.793 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:29.793 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:29.793 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:29.793 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:29.793 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.793 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:29.793 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.793 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:29.793 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.793 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:29.793 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:29.793 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.793 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.793 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.793 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:29.793 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:29.793 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:29.793 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.793 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.793 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:29.793 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:29.793 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.793 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:29.793 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:29.793 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:29.793 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:29.793 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.793 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.793 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:29.793 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:29.793 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.793 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:29.793 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:29.793 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.793 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.793 [2024-10-16 07:08:29.251029] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:29.793 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.793 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:29.793 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:29.793 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:29.793 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.793 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:29.793 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.793 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:29.793 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.055 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:30.055 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:30.055 07:08:29 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:30.055 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:30.055 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.055 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.055 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:30.055 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:30.055 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.055 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:30.055 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:30.055 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:30.055 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:30.055 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:30.055 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:30.055 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:30.055 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:30.055 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:30.055 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:30.055 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:30.055 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.055 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.055 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.055 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:30.055 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:30.055 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:30.055 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:30.055 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:30.055 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.055 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.055 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.055 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:30.055 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:30.055 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:30.055 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:30.055 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:30.055 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:30.055 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:30.055 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:30.055 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.055 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:30.055 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.055 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:30.055 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.055 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:25:30.055 07:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:25:30.628 [2024-10-16 07:08:29.975730] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:30.628 [2024-10-16 07:08:29.975752] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:30.628 [2024-10-16 07:08:29.975766] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:30.628 
[2024-10-16 07:08:30.104182] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:30.889 [2024-10-16 07:08:30.167469] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:30.889 [2024-10-16 07:08:30.167489] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:31.151 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:31.151 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:31.151 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:31.151 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:31.151 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:31.151 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.151 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:31.151 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.151 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:31.151 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.151 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.151 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:31.151 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:31.151 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:31.151 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:31.151 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:31.151 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:31.151 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:31.151 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:31.151 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:31.151 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:31.151 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:31.151 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.151 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.151 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.151 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
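The trace above exercises the basic discovery path: the host app (RPC socket /tmp/host.sock) starts bdev_nvme_start_discovery against the target's discovery service on 10.0.0.2:8009, the target creates subsystem nqn.2016-06.io.spdk:cnode0 with a null bdev namespace, a TCP listener on port 4420, and an allowed host NQN, and the harness then polls until controller nvme0 and bdev nvme0n1 appear. A minimal sketch of the same sequence with SPDK's standalone scripts/rpc.py client (assumed here to stand in for the harness's rpc_cmd wrapper; socket path, NQNs, and bdev name are taken verbatim from the trace):

    # Target side: create the subsystem that discovery will report.
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test

    # Host side: follow the discovery service; controllers attach under base name "nvme".
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

    # Poll until the attached controller and its namespace bdev show up.
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'              # expect: nvme0n1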
00:25:31.151 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:31.151 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:31.151 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:31.151 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:31.151 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:31.151 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:31.151 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:31.151 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:31.151 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:31.151 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.151 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:31.151 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.151 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:31.151 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.151 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:25:31.151 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:31.151 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:31.151 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:31.151 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:31.151 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:31.151 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:31.151 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:31.151 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:31.151 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:31.151 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:31.151 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:31.151 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.151 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.413 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.413 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:31.413 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:31.413 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:31.413 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:31.413 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:31.413 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.413 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.413 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.413 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:31.413 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:31.413 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:31.413 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:31.413 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:31.413 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:31.413 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:31.413 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:31.413 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:31.413 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:31.413 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.413 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.413 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.413 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:31.413 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:31.413 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:31.413 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:31.413 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:31.413 07:08:30 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:31.413 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:31.413 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:31.413 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:31.413 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:31.413 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:31.413 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:31.413 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.413 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.413 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.413 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:31.413 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:31.413 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:31.413 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:31.413 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:31.413 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.413 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.413 [2024-10-16 07:08:30.911435] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:31.413 [2024-10-16 07:08:30.912313] bdev_nvme.c:7135:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:31.413 [2024-10-16 07:08:30.912339] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:31.675 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.675 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:31.675 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:31.675 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:31.675 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:31.675 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:31.675 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:31.675 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:25:31.675 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:31.675 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.675 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:31.675 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.675 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:31.675 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.675 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.675 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:31.675 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:31.675 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:31.675 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:31.675 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:31.675 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:31.675 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:31.675 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:31.675 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:31.675 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.675 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:31.675 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.675 07:08:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:31.675 07:08:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.675 07:08:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:31.675 07:08:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:31.675 07:08:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:31.675 07:08:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:31.675 07:08:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:31.675 07:08:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:31.675 07:08:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:31.675 07:08:31 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:31.675 07:08:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:31.675 07:08:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:31.675 07:08:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.675 07:08:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:31.675 07:08:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.675 07:08:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:31.675 [2024-10-16 07:08:31.038174] bdev_nvme.c:7077:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:31.675 07:08:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.675 07:08:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:31.675 07:08:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:25:31.936 [2024-10-16 07:08:31.303631] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:31.936 [2024-10-16 07:08:31.303649] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:31.936 [2024-10-16 07:08:31.303659] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:32.881 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:32.881 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:32.881 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:32.881 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:32.881 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:32.881 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.881 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:32.881 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.881 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:32.881 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.881 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:32.881 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:32.881 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:32.881 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:32.881 07:08:32 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:32.881 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:32.881 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:32.881 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:32.881 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:32.881 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:32.881 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:32.881 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:32.881 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.881 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.881 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.881 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:32.881 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:32.881 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:32.881 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:32.881 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:32.881 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.881 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.881 [2024-10-16 07:08:32.191064] bdev_nvme.c:7135:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:32.881 [2024-10-16 07:08:32.191083] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:32.881 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.881 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:32.881 [2024-10-16 07:08:32.196325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.881 [2024-10-16 07:08:32.196339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.881 [2024-10-16 07:08:32.196346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.881 [2024-10-16 07:08:32.196351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.881 [2024-10-16 
07:08:32.196357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.881 [2024-10-16 07:08:32.196363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.881 [2024-10-16 07:08:32.196369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.881 [2024-10-16 07:08:32.196374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.881 [2024-10-16 07:08:32.196379] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x917ed0 is same with the state(6) to be set 00:25:32.881 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:32.881 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:32.881 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:32.881 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:32.881 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:32.881 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:32.881 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.881 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.881 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:32.881 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:32.881 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:32.881 [2024-10-16 07:08:32.206340] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x917ed0 (9): Bad file descriptor 00:25:32.881 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.881 [2024-10-16 07:08:32.216374] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:32.881 [2024-10-16 07:08:32.216672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.881 [2024-10-16 07:08:32.216683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x917ed0 with addr=10.0.0.2, port=4420 00:25:32.881 [2024-10-16 07:08:32.216689] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x917ed0 is same with the state(6) to be set 00:25:32.881 [2024-10-16 07:08:32.216697] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x917ed0 (9): Bad file descriptor 00:25:32.882 [2024-10-16 07:08:32.216705] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:32.882 [2024-10-16 07:08:32.216710] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:32.882 [2024-10-16 07:08:32.216719] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:32.882 [2024-10-16 07:08:32.216729] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:32.882 [2024-10-16 07:08:32.226422] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:32.882 [2024-10-16 07:08:32.226717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.882 [2024-10-16 07:08:32.226726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x917ed0 with addr=10.0.0.2, port=4420 00:25:32.882 [2024-10-16 07:08:32.226731] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x917ed0 is same with the state(6) to be set 00:25:32.882 [2024-10-16 07:08:32.226739] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x917ed0 (9): Bad file descriptor 00:25:32.882 [2024-10-16 07:08:32.226746] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:32.882 [2024-10-16 07:08:32.226751] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:32.882 [2024-10-16 07:08:32.226756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:32.882 [2024-10-16 07:08:32.226763] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:32.882 [2024-10-16 07:08:32.236467] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:32.882 [2024-10-16 07:08:32.236767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.882 [2024-10-16 07:08:32.236776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x917ed0 with addr=10.0.0.2, port=4420 00:25:32.882 [2024-10-16 07:08:32.236782] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x917ed0 is same with the state(6) to be set 00:25:32.882 [2024-10-16 07:08:32.236790] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x917ed0 (9): Bad file descriptor 00:25:32.882 [2024-10-16 07:08:32.236797] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:32.882 [2024-10-16 07:08:32.236801] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:32.882 [2024-10-16 07:08:32.236806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:32.882 [2024-10-16 07:08:32.236814] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
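The burst of "connect() failed, errno = 111" (ECONNREFUSED) entries above is the expected fallout of the nvmf_subsystem_remove_listener call for port 4420: the host keeps retrying the now-closed 4420 path until the discovery poller prunes it, leaving only 4421. The test then verifies the surviving path set with the jq filter seen at host/discovery.sh@63; a sketch of that check, with socket path and filter taken from the trace (the expected outputs reflect the before/after states shown in this log):

    # List the transport service IDs of every path on controller nvme0.
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    # Before the removal: "4420 4421"; after the poller prunes 4420: "4421"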
00:25:32.882 [2024-10-16 07:08:32.246513] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:32.882 [2024-10-16 07:08:32.246810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.882 [2024-10-16 07:08:32.246819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x917ed0 with addr=10.0.0.2, port=4420 00:25:32.882 [2024-10-16 07:08:32.246825] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x917ed0 is same with the state(6) to be set 00:25:32.882 [2024-10-16 07:08:32.246833] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x917ed0 (9): Bad file descriptor 00:25:32.882 [2024-10-16 07:08:32.246840] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:32.882 [2024-10-16 07:08:32.246848] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:32.882 [2024-10-16 07:08:32.246853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:32.882 [2024-10-16 07:08:32.246861] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:32.882 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.882 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:32.882 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:32.882 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:32.882 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:32.882 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:32.882 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:32.882 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:32.882 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:32.882 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:32.882 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.882 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:32.882 [2024-10-16 07:08:32.256558] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:32.882 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.882 [2024-10-16 07:08:32.256859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.882 [2024-10-16 07:08:32.256869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x917ed0 with addr=10.0.0.2, port=4420 00:25:32.882 [2024-10-16 07:08:32.256875] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x917ed0 is same with the state(6) to be set 00:25:32.882 [2024-10-16 
07:08:32.256883] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x917ed0 (9): Bad file descriptor 00:25:32.882 [2024-10-16 07:08:32.256892] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:32.882 [2024-10-16 07:08:32.256897] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:32.882 [2024-10-16 07:08:32.256902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:32.882 [2024-10-16 07:08:32.256909] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:32.882 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:32.882 [2024-10-16 07:08:32.266607] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:32.882 [2024-10-16 07:08:32.267045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.882 [2024-10-16 07:08:32.267075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x917ed0 with addr=10.0.0.2, port=4420 00:25:32.882 [2024-10-16 07:08:32.267084] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x917ed0 is same with the state(6) to be set 00:25:32.882 [2024-10-16 07:08:32.267098] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x917ed0 (9): Bad file descriptor 00:25:32.882 [2024-10-16 07:08:32.267116] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:32.882 [2024-10-16 07:08:32.267121] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:32.882 [2024-10-16 07:08:32.267127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:32.882 [2024-10-16 07:08:32.267138] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:32.882 [2024-10-16 07:08:32.276656] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:32.882 [2024-10-16 07:08:32.277066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.882 [2024-10-16 07:08:32.277097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x917ed0 with addr=10.0.0.2, port=4420 00:25:32.882 [2024-10-16 07:08:32.277109] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x917ed0 is same with the state(6) to be set 00:25:32.882 [2024-10-16 07:08:32.277124] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x917ed0 (9): Bad file descriptor 00:25:32.882 [2024-10-16 07:08:32.277133] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:32.882 [2024-10-16 07:08:32.277139] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:32.882 [2024-10-16 07:08:32.277145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:32.882 [2024-10-16 07:08:32.277158] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
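Each of these checks runs through the harness's waitforcondition helper, whose xtrace appears throughout this log (common/autotest_common.sh@914 through @918: local cond, local max=10, (( max-- )), eval, return 0, plus the sleep 1 between attempts). A hedged reconstruction assembled from those traced fragments; the timeout return path is not visible in this log and is an assumption:

    # Re-evaluate a shell condition up to 10 times, one second apart
    # (reconstructed from the xtrace above, not copied from SPDK sources).
    waitforcondition() {
        local cond=$1
        local max=10
        while (( max-- )); do
            eval "$cond" && return 0   # condition met, stop polling
            sleep 1                    # pause before the next attempt
        done
        return 1                       # assumed timeout path, not shown in this trace
    }

    # Usage as in the trace:
    waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'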
00:25:32.882 [2024-10-16 07:08:32.278649] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:32.882 [2024-10-16 07:08:32.278662] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:32.882 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.882 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:32.882 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:32.882 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:32.882 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:32.882 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:32.882 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:32.882 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:32.882 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:32.882 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:32.882 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:32.882 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.882 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:32.882 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.882 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:32.883 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.883 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:25:32.883 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:32.883 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:32.883 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:32.883 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:32.883 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:32.883 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:32.883 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:32.883 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:25:32.883 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:32.883 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:32.883 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.883 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.883 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:32.883 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.145 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:33.145 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:33.145 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:33.145 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:33.145 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:33.145 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.145 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.145 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.145 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:33.145 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:33.145 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:33.145 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:33.145 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:33.145 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:33.145 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:33.145 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:33.145 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.145 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.145 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:33.145 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:33.145 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.145 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:25:33.145 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:33.145 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:33.145 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:33.145 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:33.145 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:33.145 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:33.145 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:33.145 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:33.145 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:33.145 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.145 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:33.145 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.145 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:33.145 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.145 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:25:33.145 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:33.145 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:33.145 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:33.145 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:33.145 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:33.145 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:33.145 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:33.145 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:33.145 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:33.145 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:33.145 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:33.145 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.145 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.145 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.145 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:33.145 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:33.145 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:33.145 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:33.145 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:33.145 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.145 07:08:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.529 [2024-10-16 07:08:33.627030] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:34.529 [2024-10-16 07:08:33.627045] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:34.529 [2024-10-16 07:08:33.627054] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:34.529 [2024-10-16 07:08:33.715305] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:34.529 [2024-10-16 07:08:34.027835] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:34.529 [2024-10-16 07:08:34.027867] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:34.529 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.811 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:34.811 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:34.811 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:34.811 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:34.811 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:34.811 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:34.811 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:34.811 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.812 request: 00:25:34.812 { 00:25:34.812 "name": "nvme", 00:25:34.812 "trtype": "tcp", 00:25:34.812 "traddr": "10.0.0.2", 00:25:34.812 "adrfam": "ipv4", 00:25:34.812 "trsvcid": "8009", 00:25:34.812 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:34.812 "wait_for_attach": true, 00:25:34.812 "method": "bdev_nvme_start_discovery", 00:25:34.812 "req_id": 1 00:25:34.812 } 00:25:34.812 Got JSON-RPC error response 00:25:34.812 response: 00:25:34.812 { 00:25:34.812 "code": -17, 00:25:34.812 "message": "File exists" 00:25:34.812 } 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.812 request: 00:25:34.812 { 00:25:34.812 "name": "nvme_second", 00:25:34.812 "trtype": "tcp", 00:25:34.812 "traddr": "10.0.0.2", 00:25:34.812 "adrfam": "ipv4", 00:25:34.812 "trsvcid": "8009", 00:25:34.812 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:34.812 "wait_for_attach": true, 00:25:34.812 "method": "bdev_nvme_start_discovery", 00:25:34.812 "req_id": 1 00:25:34.812 } 00:25:34.812 Got JSON-RPC error response 00:25:34.812 response: 00:25:34.812 { 00:25:34.812 "code": -17, 00:25:34.812 "message": "File exists" 00:25:34.812 } 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 
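Two autotest_common.sh idioms carry the whole passage above: waitforcondition polls a shell condition with a bounded retry budget, and NOT runs a command that is expected to fail and inverts its status. That is how both duplicate bdev_nvme_start_discovery calls pass the test: each returns code -17, i.e. -EEXIST ("File exists"), because a discovery service already owns that endpoint on /tmp/host.sock. A minimal sketch of both helpers, assuming pacing and error-code handling the trace elides:

    # Minimal sketches of the two idioms traced above; the real helpers in
    # autotest_common.sh are more elaborate (es whitelisting, xtrace toggles).
    waitforcondition() {
        local cond=$1
        local max=10
        while ((max--)); do
            eval "$cond" && return 0   # e.g. '[[ "$(get_bdev_list)" == "" ]]'
            sleep 1                    # assumed pacing between polls
        done
        return 1                       # condition never held within the budget
    }

    NOT() {
        "$@" && return 1               # unexpected success fails the test
        return 0                       # expected failure, e.g. -17 (-EEXIST)
    }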
00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.812 07:08:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:36.195 [2024-10-16 07:08:35.291261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.195 [2024-10-16 07:08:35.291283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x914ae0 with addr=10.0.0.2, port=8010 00:25:36.195 [2024-10-16 07:08:35.291293] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:36.195 [2024-10-16 07:08:35.291298] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:36.195 [2024-10-16 07:08:35.291303] bdev_nvme.c:7221:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:37.136 [2024-10-16 07:08:36.293639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.136 [2024-10-16 07:08:36.293657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x914ae0 with addr=10.0.0.2, port=8010 00:25:37.136 [2024-10-16 07:08:36.293665] 
nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:37.136 [2024-10-16 07:08:36.293670] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:37.136 [2024-10-16 07:08:36.293675] bdev_nvme.c:7221:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:38.079 [2024-10-16 07:08:37.295643] bdev_nvme.c:7196:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:38.079 request: 00:25:38.079 { 00:25:38.079 "name": "nvme_second", 00:25:38.079 "trtype": "tcp", 00:25:38.079 "traddr": "10.0.0.2", 00:25:38.079 "adrfam": "ipv4", 00:25:38.079 "trsvcid": "8010", 00:25:38.079 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:38.079 "wait_for_attach": false, 00:25:38.079 "attach_timeout_ms": 3000, 00:25:38.079 "method": "bdev_nvme_start_discovery", 00:25:38.079 "req_id": 1 00:25:38.079 } 00:25:38.079 Got JSON-RPC error response 00:25:38.079 response: 00:25:38.079 { 00:25:38.079 "code": -110, 00:25:38.079 "message": "Connection timed out" 00:25:38.079 } 00:25:38.079 07:08:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:38.079 07:08:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:38.079 07:08:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:38.079 07:08:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:38.079 07:08:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:38.079 07:08:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:38.079 07:08:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:38.079 07:08:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:38.079 07:08:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.079 07:08:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.079 07:08:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:38.079 07:08:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:38.079 07:08:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.079 07:08:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:38.079 07:08:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:38.079 07:08:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3245025 00:25:38.079 07:08:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:38.079 07:08:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:38.079 07:08:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:25:38.079 07:08:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:38.079 07:08:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:25:38.079 07:08:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:38.079 07:08:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe 
-v -r nvme-tcp 00:25:38.079 rmmod nvme_tcp 00:25:38.079 rmmod nvme_fabrics 00:25:38.079 rmmod nvme_keyring 00:25:38.079 07:08:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:38.079 07:08:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:25:38.079 07:08:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:25:38.079 07:08:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@515 -- # '[' -n 3244975 ']' 00:25:38.079 07:08:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # killprocess 3244975 00:25:38.079 07:08:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 3244975 ']' 00:25:38.079 07:08:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 3244975 00:25:38.079 07:08:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:25:38.079 07:08:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:38.079 07:08:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3244975 00:25:38.079 07:08:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:38.079 07:08:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:38.079 07:08:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3244975' 00:25:38.079 killing process with pid 3244975 00:25:38.079 07:08:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 3244975 00:25:38.079 07:08:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 3244975 00:25:38.340 07:08:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:38.340 07:08:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:38.340 07:08:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:38.340 07:08:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:25:38.340 07:08:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:38.340 07:08:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-save 00:25:38.340 07:08:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:25:38.340 07:08:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:38.340 07:08:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:38.340 07:08:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:38.340 07:08:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:38.340 07:08:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:40.253 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:40.253 00:25:40.253 real 0m20.387s 00:25:40.253 user 0m23.654s 00:25:40.253 sys 0m7.247s 00:25:40.254 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 
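The teardown traced above unwinds the discovery test in a fixed order: unload the kernel NVMe-oF initiator modules, kill the target process (killprocess first confirms it is alive with kill -0 and inspects its comm name), then restore firewall and namespace state. A condensed sketch; $nvmfpid stands in for the literal PID 3244975 in the trace, and the netns deletion is assumed from the _remove_spdk_ns call:

    # Condensed teardown sketch; helper names follow the trace, details differ.
    modprobe -v -r nvme-tcp        # also drags out nvme_fabrics/nvme_keyring
    modprobe -v -r nvme-fabrics
    kill -0 "$nvmfpid" && kill "$nvmfpid" && wait "$nvmfpid"  # killprocess
    iptables-save | grep -v SPDK_NVMF | iptables-restore      # iptr: drop test rules
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null               # assumed _remove_spdk_ns step
    ip -4 addr flush cvl_0_1                                  # as traced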
00:25:40.254 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.254 ************************************ 00:25:40.254 END TEST nvmf_host_discovery 00:25:40.254 ************************************ 00:25:40.254 07:08:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:40.254 07:08:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:40.254 07:08:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:40.254 07:08:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.516 ************************************ 00:25:40.516 START TEST nvmf_host_multipath_status 00:25:40.516 ************************************ 00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:40.516 * Looking for test storage... 00:25:40.516 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:40.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.516 --rc genhtml_branch_coverage=1 00:25:40.516 --rc genhtml_function_coverage=1 00:25:40.516 --rc genhtml_legend=1 00:25:40.516 --rc geninfo_all_blocks=1 00:25:40.516 --rc geninfo_unexecuted_blocks=1 00:25:40.516 00:25:40.516 ' 00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:40.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.516 --rc genhtml_branch_coverage=1 00:25:40.516 --rc genhtml_function_coverage=1 00:25:40.516 --rc genhtml_legend=1 00:25:40.516 --rc geninfo_all_blocks=1 00:25:40.516 --rc geninfo_unexecuted_blocks=1 00:25:40.516 00:25:40.516 ' 00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:40.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.516 --rc genhtml_branch_coverage=1 00:25:40.516 --rc genhtml_function_coverage=1 00:25:40.516 --rc genhtml_legend=1 00:25:40.516 --rc geninfo_all_blocks=1 00:25:40.516 --rc geninfo_unexecuted_blocks=1 00:25:40.516 00:25:40.516 ' 00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:40.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.516 --rc genhtml_branch_coverage=1 00:25:40.516 --rc genhtml_function_coverage=1 00:25:40.516 --rc genhtml_legend=1 00:25:40.516 --rc geninfo_all_blocks=1 00:25:40.516 --rc geninfo_unexecuted_blocks=1 00:25:40.516 00:25:40.516 ' 00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
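The lt 1.15 2 walk above is scripts/common.sh deciding whether the installed lcov (1.15) predates version 2, which selects the coverage flags exported next. A compressed sketch of that comparison, reduced to the strictly-less-than case the trace exercises and assuming purely numeric fields:

    # Compressed sketch of the traced comparison: split each version string on
    # '.', '-' or ':' and compare numerically, field by field.
    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
        done
        return 1   # equal versions are not strictly less-than
    }
    lt 1.15 2      # returns 0: 1 < 2 is decided on the first field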
00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:40.516 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.517 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.517 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.517 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:40.517 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.517 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:25:40.517 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:40.517 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:40.517 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:40.517 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:40.517 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:40.517 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:40.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:40.517 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:40.517 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:40.517 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:40.517 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:40.517 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:40.517 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:40.517 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:40.517 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:40.517 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:40.517 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:40.517 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:40.517 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:40.517 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:40.517 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:40.517 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:40.517 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:40.517 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:40.517 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:40.517 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:40.517 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:40.517 07:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:25:40.517 07:08:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:25:48.663 07:08:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:48.663 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
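The arrays above come from nvmf/common.sh's gather_supported_nvmf_pci_devs: it keeps per-family PCI device-ID lists (e810, x722, mlx), and since this job runs with SPDK_TEST_NVMF_NICS=e810 the walk matches both 0x8086:0x159b ports (the second follows just below) and resolves their kernel netdevs. A trimmed sketch, assuming pci_bus_cache (a vendor:device to PCI-address map) was populated earlier in the sourced script:

    # Trimmed sketch of the device walk; pci_bus_cache is assumed prepopulated.
    intel=0x8086
    pci_devs=(${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]})
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # netdev dir(s) for the port
        pci_net_devs=("${pci_net_devs[@]##*/}")           # keep just the interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done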
00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:48.663 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:48.663 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: 
cvl_0_1' 00:25:48.663 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # is_hw=yes 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:48.663 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:48.664 07:08:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:48.664 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:48.664 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:48.664 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.600 ms 00:25:48.664 00:25:48.664 --- 10.0.0.2 ping statistics --- 00:25:48.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:48.664 rtt min/avg/max/mdev = 0.600/0.600/0.600/0.000 ms 00:25:48.664 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:48.664 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:48.664 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:25:48.664 00:25:48.664 --- 10.0.0.1 ping statistics --- 00:25:48.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:48.664 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:25:48.664 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:48.664 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # return 0 00:25:48.664 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:48.664 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:48.664 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:48.664 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:48.664 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:48.664 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:48.664 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:48.664 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:48.664 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:48.664 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:48.664 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:48.664 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # nvmfpid=3251187 00:25:48.664 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # waitforlisten 3251187 00:25:48.664 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:48.664 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 3251187 ']' 00:25:48.664 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:48.664 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:48.664 07:08:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:48.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:48.664 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:48.664 07:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:48.664 [2024-10-16 07:08:47.545642] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:25:48.664 [2024-10-16 07:08:47.545709] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:48.664 [2024-10-16 07:08:47.632484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:48.664 [2024-10-16 07:08:47.684809] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:48.664 [2024-10-16 07:08:47.684870] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:48.664 [2024-10-16 07:08:47.684880] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:48.664 [2024-10-16 07:08:47.684888] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:48.664 [2024-10-16 07:08:47.684894] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:48.664 [2024-10-16 07:08:47.686523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:48.664 [2024-10-16 07:08:47.686529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:48.924 07:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:48.924 07:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:25:48.924 07:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:48.924 07:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:48.924 07:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:48.924 07:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:48.924 07:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3251187 00:25:48.924 07:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:49.185 [2024-10-16 07:08:48.562129] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:49.185 07:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:49.447 Malloc0 00:25:49.447 07:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:25:49.709 07:08:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:49.970 07:08:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:49.970 [2024-10-16 07:08:49.389132] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:49.970 07:08:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:50.232 [2024-10-16 07:08:49.589579] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:50.232 07:08:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3251579 00:25:50.232 07:08:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:50.232 07:08:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:50.232 07:08:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3251579 /var/tmp/bdevperf.sock 00:25:50.232 07:08:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 3251579 ']' 00:25:50.232 07:08:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:50.232 07:08:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:50.232 07:08:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:50.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
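Strung together, the target-side provisioning just traced is six RPCs: create the TCP transport, back it with a 64 MiB malloc bdev of 512-byte blocks, create an ANA-reporting subsystem, attach the namespace, and listen on the two ports the multipath test will flip between. The same sequence standalone (rpc.py abbreviates the full workspace path used above):

    # Target provisioning as traced; -a allows any host, -r enables ANA
    # reporting (what the multipath status checks exercise), -m 2 caps namespaces.
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421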
00:25:50.233 07:08:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable
00:25:50.233 07:08:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:25:51.175 07:08:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:25:51.175 07:08:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0
00:25:51.175 07:08:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:25:51.436 07:08:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
00:25:51.696 Nvme0n1
00:25:51.696 07:08:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
00:25:51.957 Nvme0n1
00:25:51.957 07:08:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2
00:25:51.957 07:08:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests
00:25:54.504 07:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized
00:25:54.504 07:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:25:54.504 07:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:25:54.504 07:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1
00:25:55.446 07:08:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true
00:25:55.446 07:08:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:25:55.446 07:08:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:55.446 07:08:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:25:55.707 07:08:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:55.707 07:08:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:25:55.707 07:08:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:55.707 07:08:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:25:55.707 07:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:55.707 07:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:25:55.707 07:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:55.707 07:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:25:55.967 07:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:55.967 07:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:25:55.967 07:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:55.967 07:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:25:56.227 07:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:56.227 07:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:25:56.227 07:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:56.227 07:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:25:56.227 07:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:56.227 07:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:25:56.227 07:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:56.227 07:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:25:56.488 07:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:56.488 07:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized
00:25:56.488 07:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
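
[Editor's note] Everything from here down is one pattern repeated under different ANA settings. Each check_status A B C D E F line expands into six port_status probes, in order: current, connected, accessible for port 4420 and then 4421 (the @68 through @73 lines), and each probe is an RPC against the bdevperf socket plus a jq filter, exactly as traced at @64. A reconstruction of the helper follows; the real definition lives in test/nvmf/host/multipath_status.sh, so the argument handling here is a sketch, not the verbatim script.

    port_status() {    # usage: port_status 4420 current true
        local port=$1 attr=$2 want=$3 got
        got=$($rootdir/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
              jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
        # xtrace renders this comparison as the [[ true == \t\r\u\e ]] lines in the log
        [[ "$got" == "$want" ]]
    }

With both listeners optimized and the default active_passive policy, the first round asserts that 4420 is the single current path (true false) while both paths remain connected and accessible (true true true true).
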
00:25:56.748 07:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:56.748 07:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:25:58.186 07:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:25:58.186 07:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:58.186 07:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.186 07:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:58.186 07:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:58.186 07:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:58.186 07:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.186 07:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:58.186 07:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:58.186 07:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:58.186 07:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.186 07:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:58.488 07:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:58.488 07:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:58.488 07:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.488 07:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:58.488 07:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:58.488 07:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:58.488 07:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:58.488 07:08:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.765 07:08:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:58.765 07:08:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:58.765 07:08:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.765 07:08:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:59.025 07:08:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:59.025 07:08:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:25:59.025 07:08:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:59.025 07:08:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:59.285 07:08:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:00.225 07:08:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:00.225 07:08:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:00.225 07:08:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.225 07:08:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:00.486 07:08:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:00.486 07:08:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:00.486 07:08:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.486 07:08:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:00.746 07:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:00.746 07:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:00.746 07:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:00.746 07:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:01.006 07:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:01.006 07:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:01.006 07:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:01.006 07:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:01.006 07:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:01.006 07:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:01.006 07:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:01.006 07:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:01.267 07:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:01.267 07:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:01.267 07:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:01.267 07:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:01.527 07:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:01.527 07:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:01.527 07:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:01.527 07:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:01.788 07:09:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:02.727 07:09:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:02.727 07:09:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:02.728 07:09:02 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.728 07:09:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:02.988 07:09:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:02.988 07:09:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:02.988 07:09:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.988 07:09:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:03.248 07:09:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:03.248 07:09:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:03.248 07:09:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.248 07:09:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:03.248 07:09:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:03.248 07:09:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:03.248 07:09:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.248 07:09:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:03.507 07:09:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:03.507 07:09:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:03.507 07:09:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:03.507 07:09:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.768 07:09:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:03.768 07:09:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:03.768 07:09:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.768 07:09:03 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:03.768 07:09:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:03.768 07:09:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:03.768 07:09:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:04.028 07:09:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:04.288 07:09:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:05.228 07:09:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:05.228 07:09:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:05.228 07:09:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.228 07:09:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:05.488 07:09:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:05.488 07:09:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:05.488 07:09:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.488 07:09:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:05.748 07:09:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:05.748 07:09:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:05.748 07:09:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.748 07:09:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:05.748 07:09:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.748 07:09:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:05.748 07:09:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.748 07:09:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:06.008 07:09:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:06.008 07:09:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:06.008 07:09:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.008 07:09:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:06.268 07:09:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:06.268 07:09:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:06.268 07:09:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.268 07:09:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:06.268 07:09:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:06.268 07:09:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:06.268 07:09:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:06.528 07:09:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:06.789 07:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:26:07.729 07:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:26:07.729 07:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:07.729 07:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.729 07:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:07.988 07:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:07.988 07:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:07.988 07:09:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.988 07:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:07.988 07:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.988 07:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:07.988 07:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:07.988 07:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.247 07:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.247 07:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:08.247 07:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.247 07:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:08.507 07:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.507 07:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:08.507 07:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.507 07:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:08.507 07:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:08.507 07:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:08.507 07:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.507 07:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:08.767 07:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.767 07:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:09.027 07:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:26:09.027 07:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:09.287 07:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:09.287 07:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:26:10.227 07:09:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:26:10.227 07:09:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:10.227 07:09:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.227 07:09:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:10.488 07:09:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.488 07:09:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:10.488 07:09:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.488 07:09:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:10.748 07:09:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.748 07:09:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:10.748 07:09:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.748 07:09:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:11.009 07:09:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.009 07:09:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:11.009 07:09:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.009 07:09:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:11.009 07:09:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.009 07:09:10 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:11.009 07:09:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.009 07:09:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:11.269 07:09:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.269 07:09:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:11.269 07:09:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.269 07:09:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:11.530 07:09:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.530 07:09:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:11.530 07:09:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:11.530 07:09:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:11.791 07:09:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:12.732 07:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:12.732 07:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:12.732 07:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.732 07:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:12.993 07:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:12.993 07:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:12.993 07:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.993 07:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:13.254 07:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.254 07:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:13.254 07:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.254 07:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:13.254 07:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.254 07:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:13.254 07:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.254 07:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:13.515 07:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.515 07:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:13.515 07:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.515 07:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:13.775 07:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.775 07:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:13.775 07:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.775 07:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:14.035 07:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.035 07:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:14.035 07:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:14.035 07:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:14.295 07:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
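
[Editor's note] The ANA flips themselves are two target-side RPCs per call, one per listener, as every @59/@60 pair above shows. Reconstructed with the same caveat as before (a sketch of what the script does, not its verbatim text):

    set_ANA_state() {    # usage: set_ANA_state non_optimized optimized
        $rootdir/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        $rootdir/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

Note how the expected current flags change after bdev_nvme_set_multipath_policy switched Nvme0n1 to active_active at @116: with optimized/optimized, and again with non_optimized/non_optimized in the round that follows, both paths report current=true (check_status true true ...), whereas under the earlier active_passive runs only one path was ever current at a time.
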
00:26:15.235 07:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:15.235 07:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:15.235 07:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.235 07:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:15.495 07:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.495 07:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:15.495 07:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.495 07:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:15.755 07:09:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.755 07:09:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:15.755 07:09:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.755 07:09:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:15.755 07:09:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.755 07:09:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:15.755 07:09:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.755 07:09:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:16.015 07:09:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:16.015 07:09:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:16.015 07:09:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.015 07:09:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:16.275 07:09:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:16.275 07:09:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:16.275 07:09:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.275 07:09:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:16.536 07:09:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:16.536 07:09:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:16.536 07:09:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:16.536 07:09:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:16.796 07:09:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:17.736 07:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:17.736 07:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:17.736 07:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.736 07:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:17.995 07:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:17.995 07:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:17.995 07:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.995 07:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:18.255 07:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:18.255 07:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:18.255 07:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.255 07:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:18.255 07:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]]
00:26:18.255 07:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:26:18.516 07:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:18.516 07:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:26:18.516 07:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:18.516 07:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:26:18.516 07:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:18.516 07:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:26:18.777 07:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:18.777 07:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:26:18.777 07:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:18.777 07:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:26:19.041 07:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:19.041 07:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3251579
00:26:19.041 07:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 3251579 ']'
00:26:19.041 07:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 3251579
00:26:19.041 07:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname
00:26:19.041 07:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:26:19.041 07:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3251579
00:26:19.041 07:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:26:19.041 07:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:26:19.041 07:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3251579'
killing process with pid 3251579
00:26:19.041 07:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 3251579
00:26:19.041 07:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 3251579
00:26:19.041 {
00:26:19.041   "results": [
00:26:19.041     {
00:26:19.041       "job": "Nvme0n1",
00:26:19.041       "core_mask": "0x4",
00:26:19.041       "workload": "verify",
00:26:19.041       "status": "terminated",
00:26:19.041       "verify_range": {
00:26:19.041         "start": 0,
00:26:19.041         "length": 16384
00:26:19.041       },
00:26:19.041       "queue_depth": 128,
00:26:19.041       "io_size": 4096,
00:26:19.041       "runtime": 26.84794,
00:26:19.041       "iops": 11996.823592424596,
00:26:19.041       "mibps": 46.86259215790858,
00:26:19.041       "io_failed": 0,
00:26:19.041       "io_timeout": 0,
00:26:19.041       "avg_latency_us": 10650.223071895018,
00:26:19.041       "min_latency_us": 546.1333333333333,
00:26:19.041       "max_latency_us": 3019898.88
00:26:19.041     }
00:26:19.041   ],
00:26:19.041   "core_count": 1
00:26:19.041 }
00:26:19.041 07:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3251579
00:26:19.041 07:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
[2024-10-16 07:08:49.669902] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization...
[2024-10-16 07:08:49.669987] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3251579 ]
[2024-10-16 07:08:49.754731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-10-16 07:08:49.805925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
Running I/O for 90 seconds...
10865.00 IOPS, 42.44 MiB/s [2024-10-16T05:09:18.540Z]
10963.00 IOPS, 42.82 MiB/s [2024-10-16T05:09:18.540Z]
11087.00 IOPS, 43.31 MiB/s [2024-10-16T05:09:18.540Z]
11386.25 IOPS, 44.48 MiB/s [2024-10-16T05:09:18.540Z]
11679.00 IOPS, 45.62 MiB/s [2024-10-16T05:09:18.540Z]
11878.00 IOPS, 46.40 MiB/s [2024-10-16T05:09:18.540Z]
11989.14 IOPS, 46.83 MiB/s [2024-10-16T05:09:18.540Z]
12104.75 IOPS, 47.28 MiB/s [2024-10-16T05:09:18.540Z]
12187.67 IOPS, 47.61 MiB/s [2024-10-16T05:09:18.540Z]
12277.70 IOPS, 47.96 MiB/s [2024-10-16T05:09:18.540Z]
12326.82 IOPS, 48.15 MiB/s [2024-10-16T05:09:18.540Z]
[2024-10-16 07:09:03.411815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:1448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.042 [2024-10-16 07:09:03.411853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:26:19.042 [2024-10-16 07:09:03.411887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:1456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.042 [2024-10-16 07:09:03.411894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:26:19.042 [2024-10-16 07:09:03.411905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.042 [2024-10-16 07:09:03.411911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:26:19.042 [2024-10-16 07:09:03.411921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:1472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.042 [2024-10-16 07:09:03.411927] nvme_qpair.c: 
00:26:19.042 [... repeated nvme_qpair.c *NOTICE* READ/WRITE command and ASYMMETRIC ACCESS INACCESSIBLE (03/02) completion pairs, lba 1456-2400, sqhd 002b-0021 ...]
00:26:19.044 [2024-10-16 07:09:03.415702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:19.044 [2024-10-16 07:09:03.415707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:19.044 12303.08 IOPS, 48.06 MiB/s [2024-10-16T05:09:18.543Z] 11356.69 IOPS, 44.36 MiB/s [2024-10-16T05:09:18.543Z] 10545.50 IOPS, 41.19 MiB/s [2024-10-16T05:09:18.543Z] 9923.40 IOPS, 38.76 MiB/s [2024-10-16T05:09:18.543Z] 10105.19 IOPS, 39.47 MiB/s [2024-10-16T05:09:18.543Z] 10272.00 IOPS, 40.12 MiB/s [2024-10-16T05:09:18.543Z] 10644.17 IOPS, 41.58 MiB/s [2024-10-16T05:09:18.543Z] 10977.47 IOPS, 42.88 MiB/s
[2024-10-16T05:09:18.543Z] 11185.10 IOPS, 43.69 MiB/s [2024-10-16T05:09:18.543Z] 11266.14 IOPS, 44.01 MiB/s [2024-10-16T05:09:18.543Z] 11345.09 IOPS, 44.32 MiB/s [2024-10-16T05:09:18.543Z] 11552.00 IOPS, 45.12 MiB/s [2024-10-16T05:09:18.543Z] 11785.50 IOPS, 46.04 MiB/s [2024-10-16T05:09:18.543Z]
00:26:19.044 [2024-10-16 07:09:16.142964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:117208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.045 [2024-10-16 07:09:16.143001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:26:19.045 [... repeated nvme_qpair.c *NOTICE* READ/WRITE command and ASYMMETRIC ACCESS INACCESSIBLE (03/02) completion pairs, lba 117152-118176, sqhd 0050-000d ...]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:118176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.046 [2024-10-16 07:09:16.144601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:19.046 [2024-10-16 07:09:16.144612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:117248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.046 [2024-10-16 07:09:16.144618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:19.046 [2024-10-16 07:09:16.144628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:117280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.046 [2024-10-16 07:09:16.144634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:19.046 [2024-10-16 07:09:16.144644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:117312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.046 [2024-10-16 07:09:16.144649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:19.046 [2024-10-16 07:09:16.144660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:117344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.046 [2024-10-16 07:09:16.144665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:19.046 [2024-10-16 07:09:16.144676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:118184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.046 [2024-10-16 07:09:16.144681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:19.046 11929.92 IOPS, 46.60 MiB/s [2024-10-16T05:09:18.545Z] 11971.50 IOPS, 46.76 MiB/s [2024-10-16T05:09:18.545Z] Received shutdown signal, test time was about 26.848551 seconds 00:26:19.046 00:26:19.046 Latency(us) 00:26:19.046 [2024-10-16T05:09:18.545Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:19.046 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:19.046 Verification LBA range: start 0x0 length 0x4000 00:26:19.046 Nvme0n1 : 26.85 11996.82 46.86 0.00 0.00 10650.22 546.13 3019898.88 00:26:19.046 [2024-10-16T05:09:18.545Z] =================================================================================================================== 00:26:19.046 [2024-10-16T05:09:18.545Z] Total : 11996.82 46.86 0.00 0.00 10650.22 546.13 3019898.88 00:26:19.046 07:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:19.306 07:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:26:19.306 07:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:19.306 07:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@148 -- # nvmftestfini 00:26:19.306 07:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:19.306 07:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:26:19.306 07:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:19.306 07:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:26:19.306 07:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:19.306 07:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:19.306 rmmod nvme_tcp 00:26:19.306 rmmod nvme_fabrics 00:26:19.306 rmmod nvme_keyring 00:26:19.306 07:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:19.306 07:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:26:19.306 07:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:26:19.306 07:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@515 -- # '[' -n 3251187 ']' 00:26:19.306 07:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # killprocess 3251187 00:26:19.306 07:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 3251187 ']' 00:26:19.306 07:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 3251187 00:26:19.306 07:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:26:19.306 07:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:19.306 07:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3251187 00:26:19.306 07:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:19.306 07:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:19.306 07:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3251187' 00:26:19.306 killing process with pid 3251187 00:26:19.306 07:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 3251187 00:26:19.306 07:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 3251187 00:26:19.569 07:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:19.569 07:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:19.569 07:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:19.569 07:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:26:19.569 07:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-save 00:26:19.569 07:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:19.569 07:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-restore 00:26:19.569 07:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:19.569 07:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:19.569 07:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:19.569 07:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:19.569 07:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:22.113 07:09:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:22.113 00:26:22.113 real 0m41.231s 00:26:22.113 user 1m46.794s 00:26:22.113 sys 0m11.515s 00:26:22.113 07:09:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:22.113 07:09:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:22.113 ************************************ 00:26:22.113 END TEST nvmf_host_multipath_status 00:26:22.113 ************************************ 00:26:22.113 07:09:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:22.113 07:09:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:22.113 07:09:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:22.113 07:09:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.113 ************************************ 00:26:22.113 START TEST nvmf_discovery_remove_ifc 00:26:22.113 ************************************ 00:26:22.113 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:22.113 * Looking for test storage... 
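The nvmftestfini teardown traced above (killprocess on the target pid, kernel module unload, iptables restore) follows a standard pattern in the SPDK autotest helpers. A minimal bash reconstruction of that pattern, with names and details approximated from the traced commands rather than taken from the real nvmf/common.sh and autotest_common.sh:

killprocess() {                               # stop a test daemon by pid
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0    # nothing running, done
    kill "$pid" && wait "$pid"                # SIGTERM, then reap the child
}

nvmfcleanup() {                               # unload the kernel initiator stack
    sync
    for i in {1..20}; do                      # retry while references drain
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1
    done
}

killprocess "$nvmfpid"                        # pid recorded when nvmf_tgt started
nvmfcleanup
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop the test's own firewall rules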
00:26:22.113 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:22.113 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:22.113 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version 00:26:22.113 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:22.113 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:22.113 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:22.113 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:22.113 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:22.113 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:26:22.113 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:26:22.113 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:26:22.113 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:26:22.113 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:26:22.113 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:26:22.113 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:26:22.113 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:22.113 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:26:22.113 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:26:22.113 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:22.113 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:22.113 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:26:22.113 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:26:22.113 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:22.113 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:26:22.113 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:26:22.113 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:26:22.113 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:26:22.113 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:22.113 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:26:22.113 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:26:22.113 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:22.113 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:22.113 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:26:22.113 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:22.113 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:22.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.113 --rc genhtml_branch_coverage=1 00:26:22.113 --rc genhtml_function_coverage=1 00:26:22.113 --rc genhtml_legend=1 00:26:22.113 --rc geninfo_all_blocks=1 00:26:22.113 --rc geninfo_unexecuted_blocks=1 00:26:22.113 00:26:22.113 ' 00:26:22.113 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:22.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.113 --rc genhtml_branch_coverage=1 00:26:22.113 --rc genhtml_function_coverage=1 00:26:22.113 --rc genhtml_legend=1 00:26:22.113 --rc geninfo_all_blocks=1 00:26:22.113 --rc geninfo_unexecuted_blocks=1 00:26:22.113 00:26:22.113 ' 00:26:22.113 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:22.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.113 --rc genhtml_branch_coverage=1 00:26:22.113 --rc genhtml_function_coverage=1 00:26:22.113 --rc genhtml_legend=1 00:26:22.113 --rc geninfo_all_blocks=1 00:26:22.113 --rc geninfo_unexecuted_blocks=1 00:26:22.113 00:26:22.113 ' 00:26:22.113 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:22.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.113 --rc genhtml_branch_coverage=1 00:26:22.113 --rc genhtml_function_coverage=1 00:26:22.113 --rc genhtml_legend=1 00:26:22.113 --rc geninfo_all_blocks=1 00:26:22.113 --rc geninfo_unexecuted_blocks=1 00:26:22.113 00:26:22.113 ' 00:26:22.113 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:22.113 
07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:22.113 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:22.113 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:22.113 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:22.113 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:22.113 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:22.113 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:22.113 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:22.113 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:22.113 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:22.113 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:22.114 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:22.114 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:22.114 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:22.114 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:22.114 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:22.114 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:22.114 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:22.114 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:26:22.114 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:22.114 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:22.114 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:22.114 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.114 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.114 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.114 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:26:22.114 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.114 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:26:22.114 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:22.114 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:22.114 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:22.114 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:22.114 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:22.114 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:22.114 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:22.114 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:22.114 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:22.114 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:22.114 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:22.114 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:22.114 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:22.114 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:22.114 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:22.114 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:26:22.114 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:22.114 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:22.114 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:22.114 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:22.114 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:22.114 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:22.114 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:22.114 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:22.114 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:22.114 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:22.114 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:22.114 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:26:22.114 07:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:30.257 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:30.257 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:26:30.257 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:30.257 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:30.257 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:30.257 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:30.257 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:30.257 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:26:30.257 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:30.257 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:26:30.257 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:26:30.257 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:26:30.258 07:09:28 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:30.258 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:30.258 07:09:28 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:30.258 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:30.258 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:30.258 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # 
net_devs+=("${pci_net_devs[@]}") 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # is_hw=yes 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:30.258 
07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:30.258 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:30.258 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.668 ms 00:26:30.258 00:26:30.258 --- 10.0.0.2 ping statistics --- 00:26:30.258 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:30.258 rtt min/avg/max/mdev = 0.668/0.668/0.668/0.000 ms 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:30.258 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:30.258 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:26:30.258 00:26:30.258 --- 10.0.0.1 ping statistics --- 00:26:30.258 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:30.258 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # return 0 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # nvmfpid=3262209 00:26:30.258 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # waitforlisten 3262209 00:26:30.259 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:30.259 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 3262209 ']' 00:26:30.259 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:30.259 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:30.259 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:26:30.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:30.259 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:30.259 07:09:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:30.259 [2024-10-16 07:09:28.865757] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:26:30.259 [2024-10-16 07:09:28.865827] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:30.259 [2024-10-16 07:09:28.959004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:30.259 [2024-10-16 07:09:29.009661] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:30.259 [2024-10-16 07:09:29.009716] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:30.259 [2024-10-16 07:09:29.009725] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:30.259 [2024-10-16 07:09:29.009732] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:30.259 [2024-10-16 07:09:29.009739] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:30.259 [2024-10-16 07:09:29.010505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:30.259 07:09:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:30.259 07:09:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:26:30.259 07:09:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:30.259 07:09:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:30.259 07:09:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:30.259 07:09:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:30.259 07:09:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:30.259 07:09:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.259 07:09:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:30.259 [2024-10-16 07:09:29.755853] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:30.521 [2024-10-16 07:09:29.764130] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:30.521 null0 00:26:30.521 [2024-10-16 07:09:29.796053] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:30.521 07:09:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.521 07:09:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3262363 00:26:30.521 07:09:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3262363 /tmp/host.sock 00:26:30.521 07:09:29 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:30.521 07:09:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 3262363 ']' 00:26:30.521 07:09:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:26:30.521 07:09:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:30.521 07:09:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:30.521 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:30.521 07:09:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:30.521 07:09:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:30.521 [2024-10-16 07:09:29.872134] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:26:30.521 [2024-10-16 07:09:29.872198] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3262363 ] 00:26:30.521 [2024-10-16 07:09:29.951495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:30.521 [2024-10-16 07:09:30.006374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:31.465 07:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:31.465 07:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:26:31.465 07:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:31.465 07:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:31.465 07:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.465 07:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:31.465 07:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.465 07:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:31.465 07:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.465 07:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:31.465 07:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.465 07:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:31.465 07:09:30 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.465 07:09:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:32.407 [2024-10-16 07:09:31.825471] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:32.407 [2024-10-16 07:09:31.825495] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:32.407 [2024-10-16 07:09:31.825510] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:32.668 [2024-10-16 07:09:31.952925] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:32.668 [2024-10-16 07:09:32.137835] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:32.668 [2024-10-16 07:09:32.137893] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:32.668 [2024-10-16 07:09:32.137920] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:32.668 [2024-10-16 07:09:32.137933] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:32.668 [2024-10-16 07:09:32.137953] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:32.668 07:09:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.668 07:09:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:32.668 07:09:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:32.668 [2024-10-16 07:09:32.143777] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1612450 was disconnected and freed. delete nvme_qpair. 
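The wait_for_bdev polling that drives the rest of this test is built from the rpc_cmd/jq/sort/xargs pipeline traced below (rpc_cmd wraps scripts/rpc.py against the host socket). A condensed sketch of those two helpers, approximated from the traced commands rather than copied from discovery_remove_ifc.sh:

get_bdev_list() {
    # Query the host app on /tmp/host.sock and flatten the bdev names
    # into a single sorted, space-separated line.
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    # Poll once per second until the list matches the expected value:
    # "nvme0n1" after attach, "" once the interface has been removed.
    local expected=$1
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
        sleep 1
    done
}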
00:26:32.668 07:09:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:32.668 07:09:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:32.668 07:09:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.668 07:09:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:32.668 07:09:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:32.668 07:09:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:32.668 07:09:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.929 07:09:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:32.929 07:09:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:32.929 07:09:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:32.929 07:09:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:32.929 07:09:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:32.929 07:09:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:32.929 07:09:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:32.929 07:09:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.929 07:09:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:32.929 07:09:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:32.929 07:09:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:32.929 07:09:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.929 07:09:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:32.929 07:09:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:33.871 07:09:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:33.871 07:09:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:33.871 07:09:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:33.871 07:09:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.871 07:09:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:33.871 07:09:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:33.871 07:09:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:34.131 07:09:33 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.131 07:09:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:34.131 07:09:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:35.072 07:09:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:35.072 07:09:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:35.072 07:09:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:35.072 07:09:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.072 07:09:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:35.072 07:09:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:35.072 07:09:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:35.072 07:09:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.072 07:09:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:35.072 07:09:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:36.013 07:09:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:36.013 07:09:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:36.013 07:09:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:36.013 07:09:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.013 07:09:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:36.013 07:09:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:36.013 07:09:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:36.013 07:09:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.273 07:09:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:36.273 07:09:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:37.214 07:09:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:37.214 07:09:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:37.214 07:09:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:37.214 07:09:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.214 07:09:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:37.214 07:09:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:37.214 07:09:36 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:37.214 07:09:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.214 07:09:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:37.214 07:09:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:38.154 [2024-10-16 07:09:37.578424] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:38.154 [2024-10-16 07:09:37.578460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:38.154 [2024-10-16 07:09:37.578470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.154 [2024-10-16 07:09:37.578481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:38.154 [2024-10-16 07:09:37.578486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.154 [2024-10-16 07:09:37.578492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:38.154 [2024-10-16 07:09:37.578498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.154 [2024-10-16 07:09:37.578503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:38.155 [2024-10-16 07:09:37.578509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.155 [2024-10-16 07:09:37.578515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:38.155 [2024-10-16 07:09:37.578520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.155 [2024-10-16 07:09:37.578526] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15eecc0 is same with the state(6) to be set 00:26:38.155 07:09:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:38.155 07:09:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:38.155 07:09:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:38.155 07:09:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.155 07:09:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:38.155 07:09:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:38.155 07:09:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:38.155 [2024-10-16 07:09:37.588449] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15eecc0 (9): 
Bad file descriptor 00:26:38.155 [2024-10-16 07:09:37.598484] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:38.155 07:09:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.155 07:09:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:38.155 07:09:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:39.538 [2024-10-16 07:09:38.616901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:39.538 [2024-10-16 07:09:38.616991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15eecc0 with addr=10.0.0.2, port=4420 00:26:39.538 [2024-10-16 07:09:38.617023] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15eecc0 is same with the state(6) to be set 00:26:39.538 [2024-10-16 07:09:38.617077] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15eecc0 (9): Bad file descriptor 00:26:39.538 [2024-10-16 07:09:38.617186] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:39.538 [2024-10-16 07:09:38.617240] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:39.539 [2024-10-16 07:09:38.617262] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:39.539 [2024-10-16 07:09:38.617286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:39.539 [2024-10-16 07:09:38.617329] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:39.539 [2024-10-16 07:09:38.617363] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:39.539 07:09:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:39.539 07:09:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:39.539 07:09:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:39.539 07:09:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.539 07:09:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:39.539 07:09:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:39.539 07:09:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:39.539 07:09:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.539 07:09:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:39.539 07:09:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:40.482 [2024-10-16 07:09:39.619767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:26:40.482 [2024-10-16 07:09:39.619785] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:40.482 [2024-10-16 07:09:39.619791] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:40.482 [2024-10-16 07:09:39.619796] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:26:40.482 [2024-10-16 07:09:39.619806] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:40.482 [2024-10-16 07:09:39.619820] bdev_nvme.c:6904:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:40.482 [2024-10-16 07:09:39.619836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.482 [2024-10-16 07:09:39.619847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.482 [2024-10-16 07:09:39.619854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.482 [2024-10-16 07:09:39.619860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.482 [2024-10-16 07:09:39.619866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.482 [2024-10-16 07:09:39.619871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.482 [2024-10-16 07:09:39.619877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.482 [2024-10-16 07:09:39.619882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.482 [2024-10-16 07:09:39.619889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.482 [2024-10-16 07:09:39.619894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.482 [2024-10-16 07:09:39.619899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
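The checks traced above and below come from discovery_remove_ifc.sh lines 29-34: the test polls the host application once per second until the bdev list matches what it expects (nvme0n1 present, then gone, then nvme1n1). A minimal sketch of that get_bdev_list/wait_for_bdev pattern, assuming rpc_cmd wraps scripts/rpc.py and /tmp/host.sock is the host app's RPC socket:

    # Hedged sketch of the polling pattern shown in the xtrace above; the names
    # and pipeline stages (jq | sort | xargs) are taken from the trace itself.
    get_bdev_list() {
        # Emit all bdev names as one sorted, space-separated string.
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # Poll once per second until the bdev list equals the expected value
        # (an empty expectation waits for the bdev to disappear).
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }
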
00:26:40.482 [2024-10-16 07:09:39.620585] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15de400 (9): Bad file descriptor 00:26:40.482 [2024-10-16 07:09:39.621595] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:40.482 [2024-10-16 07:09:39.621606] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:26:40.482 07:09:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:40.482 07:09:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:40.482 07:09:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:40.482 07:09:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.482 07:09:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:40.482 07:09:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:40.482 07:09:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:40.482 07:09:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.482 07:09:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:40.482 07:09:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:40.482 07:09:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:40.482 07:09:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:40.482 07:09:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:40.482 07:09:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:40.482 07:09:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:40.482 07:09:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.482 07:09:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:40.482 07:09:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:40.482 07:09:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:40.482 07:09:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.482 07:09:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:40.482 07:09:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:41.424 07:09:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:41.424 07:09:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:41.424 07:09:40 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:41.424 07:09:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.424 07:09:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:41.424 07:09:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:41.424 07:09:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:41.424 07:09:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.684 07:09:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:41.684 07:09:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:42.255 [2024-10-16 07:09:41.680767] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:42.255 [2024-10-16 07:09:41.680783] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:42.255 [2024-10-16 07:09:41.680793] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:42.515 [2024-10-16 07:09:41.810168] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:42.515 07:09:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:42.515 07:09:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:42.515 07:09:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:42.515 07:09:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.515 07:09:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:42.515 07:09:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:42.515 07:09:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:42.515 07:09:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.515 07:09:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:42.515 07:09:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:42.515 [2024-10-16 07:09:41.992784] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:42.515 [2024-10-16 07:09:41.992814] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:42.515 [2024-10-16 07:09:41.992829] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:42.515 [2024-10-16 07:09:41.992839] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:42.515 [2024-10-16 07:09:41.992850] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:42.515 [2024-10-16 07:09:41.998085] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x15ea490 was disconnected and freed. 
delete nvme_qpair. 00:26:43.898 07:09:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:43.898 07:09:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:43.898 07:09:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:43.898 07:09:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.898 07:09:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:43.898 07:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:43.898 07:09:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:43.898 07:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.898 07:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:43.898 07:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:43.898 07:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3262363 00:26:43.898 07:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 3262363 ']' 00:26:43.898 07:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 3262363 00:26:43.898 07:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:26:43.898 07:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:43.898 07:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3262363 00:26:43.898 07:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:43.898 07:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:43.898 07:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3262363' 00:26:43.898 killing process with pid 3262363 00:26:43.898 07:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 3262363 00:26:43.898 07:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 3262363 00:26:43.898 07:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:43.898 07:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:43.898 07:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:26:43.898 07:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:43.898 07:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:26:43.898 07:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:43.898 07:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:43.898 rmmod nvme_tcp 00:26:43.898 rmmod nvme_fabrics 00:26:43.898 rmmod nvme_keyring 
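The shutdown sequence above is the killprocess helper from autotest_common.sh, applied first to the host app (pid 3262363) and, just below, to the nvmf target (pid 3262209): confirm the pid is still alive with kill -0, ps-check that it is the expected reactor process and not a sudo wrapper, signal it, then reap it with wait. A condensed sketch under those assumptions (the real helper also has FreeBSD and sudo-child branches not exercised in this trace):

    # Hedged condensation of the killprocess flow visible in the xtrace above.
    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 1     # already gone? nothing to do
        local name
        name=$(ps --no-headers -o comm= "$pid")    # sanity-check the victim
        [[ $name != sudo ]] || return 1            # never signal the sudo wrapper itself
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                        # reap; the pid is a child of this shell
    }
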
00:26:43.898 07:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:43.898 07:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:26:43.898 07:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:26:43.898 07:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@515 -- # '[' -n 3262209 ']' 00:26:43.898 07:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # killprocess 3262209 00:26:43.898 07:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 3262209 ']' 00:26:43.898 07:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 3262209 00:26:43.898 07:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:26:43.898 07:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:43.898 07:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3262209 00:26:43.898 07:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:43.898 07:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:43.898 07:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3262209' 00:26:43.898 killing process with pid 3262209 00:26:43.898 07:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 3262209 00:26:43.898 07:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 3262209 00:26:44.158 07:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:44.158 07:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:44.158 07:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:44.158 07:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:26:44.158 07:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-save 00:26:44.158 07:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:44.158 07:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-restore 00:26:44.158 07:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:44.158 07:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:44.158 07:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:44.158 07:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:44.158 07:09:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:46.183 07:09:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:46.183 00:26:46.183 real 0m24.455s 00:26:46.183 user 0m29.555s 00:26:46.183 sys 0m7.242s 00:26:46.183 07:09:45 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:46.183 07:09:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:46.183 ************************************ 00:26:46.183 END TEST nvmf_discovery_remove_ifc 00:26:46.183 ************************************ 00:26:46.183 07:09:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:46.183 07:09:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:46.183 07:09:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:46.183 07:09:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.183 ************************************ 00:26:46.183 START TEST nvmf_identify_kernel_target 00:26:46.183 ************************************ 00:26:46.183 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:46.445 * Looking for test storage... 00:26:46.445 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:46.445 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:46.445 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:26:46.445 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:46.445 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:46.445 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:46.445 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:46.445 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:46.445 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:26:46.445 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:26:46.445 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:26:46.445 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:26:46.445 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:26:46.445 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:26:46.445 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:26:46.445 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:46.445 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:26:46.445 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:26:46.445 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:46.445 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:46.445 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:26:46.445 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:26:46.445 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:46.445 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:26:46.445 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:26:46.445 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:26:46.445 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:26:46.445 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:46.445 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:26:46.445 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:26:46.445 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:46.445 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:46.445 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:26:46.445 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:46.445 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:46.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:46.445 --rc genhtml_branch_coverage=1 00:26:46.445 --rc genhtml_function_coverage=1 00:26:46.445 --rc genhtml_legend=1 00:26:46.445 --rc geninfo_all_blocks=1 00:26:46.445 --rc geninfo_unexecuted_blocks=1 00:26:46.445 00:26:46.445 ' 00:26:46.445 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:46.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:46.445 --rc genhtml_branch_coverage=1 00:26:46.445 --rc genhtml_function_coverage=1 00:26:46.445 --rc genhtml_legend=1 00:26:46.445 --rc geninfo_all_blocks=1 00:26:46.445 --rc geninfo_unexecuted_blocks=1 00:26:46.445 00:26:46.445 ' 00:26:46.445 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:46.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:46.445 --rc genhtml_branch_coverage=1 00:26:46.445 --rc genhtml_function_coverage=1 00:26:46.445 --rc genhtml_legend=1 00:26:46.445 --rc geninfo_all_blocks=1 00:26:46.445 --rc geninfo_unexecuted_blocks=1 00:26:46.445 00:26:46.445 ' 00:26:46.445 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:46.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:46.445 --rc genhtml_branch_coverage=1 00:26:46.445 --rc genhtml_function_coverage=1 00:26:46.445 --rc genhtml_legend=1 00:26:46.445 --rc geninfo_all_blocks=1 00:26:46.445 --rc geninfo_unexecuted_blocks=1 00:26:46.445 00:26:46.445 ' 00:26:46.445 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:46.445 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:26:46.446 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:46.446 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:46.446 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:46.446 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:46.446 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:46.446 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:46.446 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:46.446 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:46.446 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:46.446 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:46.446 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:46.446 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:46.446 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:46.446 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:46.446 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:46.446 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:46.446 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:46.446 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:26:46.446 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:46.446 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:46.446 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:46.446 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:46.446 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:46.446 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:46.446 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:46.446 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:46.446 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:26:46.446 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:46.446 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:46.446 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:46.446 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:46.446 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:46.446 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:26:46.446 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:46.446 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:46.446 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:46.446 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:46.446 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:26:46.446 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:46.446 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:46.446 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:46.446 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:46.446 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:46.446 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:46.446 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:46.446 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:46.446 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:46.446 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:46.446 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:26:46.446 07:09:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:54.590 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:54.590 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:26:54.590 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:54.590 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:54.590 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:54.590 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:54.590 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:54.590 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:26:54.590 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:54.590 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:26:54.590 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:26:54.590 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:26:54.590 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:26:54.590 07:09:52 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:26:54.590 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:26:54.590 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:54.590 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:54.590 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:54.590 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:54.590 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:54.590 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:54.590 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:54.590 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:54.590 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:54.590 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:54.590 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:54.590 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:54.590 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:54.590 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:54.590 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:54.590 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:54.590 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:54.590 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:54.590 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:54.590 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:54.590 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:54.590 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:54.590 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:54.590 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:54.590 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:54.590 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:54.590 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:54.590 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:54.590 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:54.590 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:54.590 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:54.590 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:54.590 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:54.590 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:54.591 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:54.591 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:54.591 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:54.591 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:54.591 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:54.591 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:54.591 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:54.591 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:54.591 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:54.591 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:54.591 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:54.591 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:54.591 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:54.591 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:54.591 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:54.591 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:54.591 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:54.591 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:54.591 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:54.591 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:54.591 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:54.591 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:54.591 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:26:54.591 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:54.591 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # is_hw=yes 00:26:54.591 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:54.591 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:54.591 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:54.591 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:54.591 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:54.591 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:54.591 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:54.591 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:54.591 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:54.591 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:54.591 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:54.591 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:54.591 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:54.591 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:54.591 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:54.591 07:09:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:54.591 07:09:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:54.591 07:09:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:54.591 07:09:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:54.591 07:09:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:54.591 07:09:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:54.591 07:09:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:54.591 07:09:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:54.591 07:09:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:54.591 07:09:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:54.591 07:09:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:54.591 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:54.591 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.687 ms 00:26:54.591 00:26:54.591 --- 10.0.0.2 ping statistics --- 00:26:54.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:54.591 rtt min/avg/max/mdev = 0.687/0.687/0.687/0.000 ms 00:26:54.591 07:09:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:54.591 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:54.591 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:26:54.591 00:26:54.591 --- 10.0.0.1 ping statistics --- 00:26:54.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:54.591 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:26:54.591 07:09:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:54.591 07:09:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # return 0 00:26:54.591 07:09:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:54.591 07:09:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:54.591 07:09:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:54.591 07:09:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:54.591 07:09:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:54.591 07:09:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:54.591 07:09:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:54.591 07:09:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:54.591 07:09:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:54.591 07:09:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@767 -- # local ip 00:26:54.591 07:09:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:54.591 07:09:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:54.591 07:09:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.591 07:09:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.591 07:09:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:54.591 07:09:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.591 07:09:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:54.591 07:09:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:54.591 07:09:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:54.591 07:09:53 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:54.591 07:09:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:54.591 07:09:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:54.591 07:09:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:26:54.591 07:09:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:54.591 07:09:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:54.591 07:09:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:54.591 07:09:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # local block nvme 00:26:54.591 07:09:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]] 00:26:54.591 07:09:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # modprobe nvmet 00:26:54.591 07:09:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:54.591 07:09:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:57.908 Waiting for block devices as requested 00:26:57.908 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:26:57.908 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:26:57.908 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:26:57.908 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:26:57.908 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:26:57.908 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:26:57.908 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:26:57.908 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:26:58.168 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:26:58.168 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:26:58.428 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:26:58.428 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:26:58.428 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:26:58.688 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:26:58.688 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:26:58.688 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:26:58.947 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:26:59.208 07:09:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:26:59.208 07:09:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:59.208 07:09:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:26:59.208 07:09:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:26:59.208 07:09:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:59.208 07:09:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 
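What follows is configure_kernel_target from nvmf/common.sh building an in-kernel NVMe-oF target entirely through configfs: create the subsystem, namespace and port directories, point the namespace at the local /dev/nvme0n1, give the port the 10.0.0.1:4420 TCP listener, and link the subsystem into the port. xtrace does not show the echo redirect targets, so the attribute file names in this sketch are the standard nvmet configfs ones, reconstructed rather than read from the log:

    # Hedged reconstruction of the configfs recipe traced below; the NQN, address,
    # port and backing device come from the trace, attribute paths are assumed.
    modprobe nvmet                    # nvmet-tcp must also be available for trtype=tcp
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    mkdir -p "$subsys/namespaces/1" "$port"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_serial"   # assumed target file
    echo 1            > "$subsys/attr_allow_any_host"               # assumed target file
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"    # expose the subsystem on the port

The nvme discover call a few lines below then reads this port's discovery log back from 10.0.0.1:4420, which is where the two discovery log entries come from.
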
00:26:59.208 07:09:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:26:59.208 07:09:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:59.208 07:09:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:59.208 No valid GPT data, bailing 00:26:59.208 07:09:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:59.208 07:09:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:26:59.208 07:09:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:26:59.208 07:09:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:26:59.208 07:09:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:26:59.208 07:09:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:59.208 07:09:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:59.208 07:09:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:59.208 07:09:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:59.208 07:09:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:26:59.208 07:09:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:26:59.208 07:09:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:26:59.208 07:09:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:26:59.208 07:09:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo tcp 00:26:59.208 07:09:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 4420 00:26:59.208 07:09:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo ipv4 00:26:59.208 07:09:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:59.208 07:09:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:26:59.470 00:26:59.470 Discovery Log Number of Records 2, Generation counter 2 00:26:59.470 =====Discovery Log Entry 0====== 00:26:59.470 trtype: tcp 00:26:59.470 adrfam: ipv4 00:26:59.470 subtype: current discovery subsystem 00:26:59.470 treq: not specified, sq flow control disable supported 00:26:59.470 portid: 1 00:26:59.470 trsvcid: 4420 00:26:59.470 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:59.470 traddr: 10.0.0.1 00:26:59.470 eflags: none 00:26:59.470 sectype: none 00:26:59.470 =====Discovery Log Entry 1====== 00:26:59.470 trtype: tcp 00:26:59.470 adrfam: ipv4 00:26:59.470 subtype: nvme subsystem 00:26:59.470 treq: not specified, sq flow control disable 
supported 00:26:59.470 portid: 1 00:26:59.470 trsvcid: 4420 00:26:59.470 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:59.470 traddr: 10.0.0.1 00:26:59.470 eflags: none 00:26:59.470 sectype: none 00:26:59.470 07:09:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:26:59.470 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:59.470 ===================================================== 00:26:59.470 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:59.470 ===================================================== 00:26:59.470 Controller Capabilities/Features 00:26:59.470 ================================ 00:26:59.470 Vendor ID: 0000 00:26:59.470 Subsystem Vendor ID: 0000 00:26:59.470 Serial Number: 2ac8a8f0d9827dfaaf64 00:26:59.470 Model Number: Linux 00:26:59.470 Firmware Version: 6.8.9-20 00:26:59.470 Recommended Arb Burst: 0 00:26:59.470 IEEE OUI Identifier: 00 00 00 00:26:59.470 Multi-path I/O 00:26:59.470 May have multiple subsystem ports: No 00:26:59.470 May have multiple controllers: No 00:26:59.470 Associated with SR-IOV VF: No 00:26:59.470 Max Data Transfer Size: Unlimited 00:26:59.470 Max Number of Namespaces: 0 00:26:59.470 Max Number of I/O Queues: 1024 00:26:59.470 NVMe Specification Version (VS): 1.3 00:26:59.470 NVMe Specification Version (Identify): 1.3 00:26:59.470 Maximum Queue Entries: 1024 00:26:59.470 Contiguous Queues Required: No 00:26:59.470 Arbitration Mechanisms Supported 00:26:59.470 Weighted Round Robin: Not Supported 00:26:59.470 Vendor Specific: Not Supported 00:26:59.470 Reset Timeout: 7500 ms 00:26:59.470 Doorbell Stride: 4 bytes 00:26:59.470 NVM Subsystem Reset: Not Supported 00:26:59.470 Command Sets Supported 00:26:59.470 NVM Command Set: Supported 00:26:59.470 Boot Partition: Not Supported 00:26:59.470 Memory Page Size Minimum: 4096 bytes 00:26:59.470 Memory Page Size Maximum: 4096 bytes 00:26:59.470 Persistent Memory Region: Not Supported 00:26:59.470 Optional Asynchronous Events Supported 00:26:59.470 Namespace Attribute Notices: Not Supported 00:26:59.470 Firmware Activation Notices: Not Supported 00:26:59.470 ANA Change Notices: Not Supported 00:26:59.470 PLE Aggregate Log Change Notices: Not Supported 00:26:59.470 LBA Status Info Alert Notices: Not Supported 00:26:59.470 EGE Aggregate Log Change Notices: Not Supported 00:26:59.470 Normal NVM Subsystem Shutdown event: Not Supported 00:26:59.470 Zone Descriptor Change Notices: Not Supported 00:26:59.470 Discovery Log Change Notices: Supported 00:26:59.470 Controller Attributes 00:26:59.470 128-bit Host Identifier: Not Supported 00:26:59.470 Non-Operational Permissive Mode: Not Supported 00:26:59.470 NVM Sets: Not Supported 00:26:59.470 Read Recovery Levels: Not Supported 00:26:59.470 Endurance Groups: Not Supported 00:26:59.470 Predictable Latency Mode: Not Supported 00:26:59.470 Traffic Based Keep ALive: Not Supported 00:26:59.470 Namespace Granularity: Not Supported 00:26:59.470 SQ Associations: Not Supported 00:26:59.470 UUID List: Not Supported 00:26:59.470 Multi-Domain Subsystem: Not Supported 00:26:59.470 Fixed Capacity Management: Not Supported 00:26:59.470 Variable Capacity Management: Not Supported 00:26:59.470 Delete Endurance Group: Not Supported 00:26:59.470 Delete NVM Set: Not Supported 00:26:59.470 Extended LBA Formats Supported: Not Supported 00:26:59.470 Flexible Data Placement 
Supported: Not Supported 00:26:59.470 00:26:59.470 Controller Memory Buffer Support 00:26:59.470 ================================ 00:26:59.470 Supported: No 00:26:59.470 00:26:59.470 Persistent Memory Region Support 00:26:59.470 ================================ 00:26:59.470 Supported: No 00:26:59.470 00:26:59.470 Admin Command Set Attributes 00:26:59.470 ============================ 00:26:59.470 Security Send/Receive: Not Supported 00:26:59.470 Format NVM: Not Supported 00:26:59.470 Firmware Activate/Download: Not Supported 00:26:59.470 Namespace Management: Not Supported 00:26:59.470 Device Self-Test: Not Supported 00:26:59.470 Directives: Not Supported 00:26:59.470 NVMe-MI: Not Supported 00:26:59.470 Virtualization Management: Not Supported 00:26:59.470 Doorbell Buffer Config: Not Supported 00:26:59.470 Get LBA Status Capability: Not Supported 00:26:59.470 Command & Feature Lockdown Capability: Not Supported 00:26:59.470 Abort Command Limit: 1 00:26:59.470 Async Event Request Limit: 1 00:26:59.470 Number of Firmware Slots: N/A 00:26:59.470 Firmware Slot 1 Read-Only: N/A 00:26:59.470 Firmware Activation Without Reset: N/A 00:26:59.470 Multiple Update Detection Support: N/A 00:26:59.470 Firmware Update Granularity: No Information Provided 00:26:59.470 Per-Namespace SMART Log: No 00:26:59.470 Asymmetric Namespace Access Log Page: Not Supported 00:26:59.470 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:59.470 Command Effects Log Page: Not Supported 00:26:59.470 Get Log Page Extended Data: Supported 00:26:59.470 Telemetry Log Pages: Not Supported 00:26:59.470 Persistent Event Log Pages: Not Supported 00:26:59.470 Supported Log Pages Log Page: May Support 00:26:59.470 Commands Supported & Effects Log Page: Not Supported 00:26:59.470 Feature Identifiers & Effects Log Page:May Support 00:26:59.470 NVMe-MI Commands & Effects Log Page: May Support 00:26:59.470 Data Area 4 for Telemetry Log: Not Supported 00:26:59.470 Error Log Page Entries Supported: 1 00:26:59.470 Keep Alive: Not Supported 00:26:59.470 00:26:59.470 NVM Command Set Attributes 00:26:59.470 ========================== 00:26:59.470 Submission Queue Entry Size 00:26:59.471 Max: 1 00:26:59.471 Min: 1 00:26:59.471 Completion Queue Entry Size 00:26:59.471 Max: 1 00:26:59.471 Min: 1 00:26:59.471 Number of Namespaces: 0 00:26:59.471 Compare Command: Not Supported 00:26:59.471 Write Uncorrectable Command: Not Supported 00:26:59.471 Dataset Management Command: Not Supported 00:26:59.471 Write Zeroes Command: Not Supported 00:26:59.471 Set Features Save Field: Not Supported 00:26:59.471 Reservations: Not Supported 00:26:59.471 Timestamp: Not Supported 00:26:59.471 Copy: Not Supported 00:26:59.471 Volatile Write Cache: Not Present 00:26:59.471 Atomic Write Unit (Normal): 1 00:26:59.471 Atomic Write Unit (PFail): 1 00:26:59.471 Atomic Compare & Write Unit: 1 00:26:59.471 Fused Compare & Write: Not Supported 00:26:59.471 Scatter-Gather List 00:26:59.471 SGL Command Set: Supported 00:26:59.471 SGL Keyed: Not Supported 00:26:59.471 SGL Bit Bucket Descriptor: Not Supported 00:26:59.471 SGL Metadata Pointer: Not Supported 00:26:59.471 Oversized SGL: Not Supported 00:26:59.471 SGL Metadata Address: Not Supported 00:26:59.471 SGL Offset: Supported 00:26:59.471 Transport SGL Data Block: Not Supported 00:26:59.471 Replay Protected Memory Block: Not Supported 00:26:59.471 00:26:59.471 Firmware Slot Information 00:26:59.471 ========================= 00:26:59.471 Active slot: 0 00:26:59.471 00:26:59.471 00:26:59.471 Error Log 00:26:59.471 
========= 00:26:59.471 00:26:59.471 Active Namespaces 00:26:59.471 ================= 00:26:59.471 Discovery Log Page 00:26:59.471 ================== 00:26:59.471 Generation Counter: 2 00:26:59.471 Number of Records: 2 00:26:59.471 Record Format: 0 00:26:59.471 00:26:59.471 Discovery Log Entry 0 00:26:59.471 ---------------------- 00:26:59.471 Transport Type: 3 (TCP) 00:26:59.471 Address Family: 1 (IPv4) 00:26:59.471 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:59.471 Entry Flags: 00:26:59.471 Duplicate Returned Information: 0 00:26:59.471 Explicit Persistent Connection Support for Discovery: 0 00:26:59.471 Transport Requirements: 00:26:59.471 Secure Channel: Not Specified 00:26:59.471 Port ID: 1 (0x0001) 00:26:59.471 Controller ID: 65535 (0xffff) 00:26:59.471 Admin Max SQ Size: 32 00:26:59.471 Transport Service Identifier: 4420 00:26:59.471 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:59.471 Transport Address: 10.0.0.1 00:26:59.471 Discovery Log Entry 1 00:26:59.471 ---------------------- 00:26:59.471 Transport Type: 3 (TCP) 00:26:59.471 Address Family: 1 (IPv4) 00:26:59.471 Subsystem Type: 2 (NVM Subsystem) 00:26:59.471 Entry Flags: 00:26:59.471 Duplicate Returned Information: 0 00:26:59.471 Explicit Persistent Connection Support for Discovery: 0 00:26:59.471 Transport Requirements: 00:26:59.471 Secure Channel: Not Specified 00:26:59.471 Port ID: 1 (0x0001) 00:26:59.471 Controller ID: 65535 (0xffff) 00:26:59.471 Admin Max SQ Size: 32 00:26:59.471 Transport Service Identifier: 4420 00:26:59.471 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:59.471 Transport Address: 10.0.0.1 00:26:59.471 07:09:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:59.471 get_feature(0x01) failed 00:26:59.471 get_feature(0x02) failed 00:26:59.471 get_feature(0x04) failed 00:26:59.471 ===================================================== 00:26:59.471 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:59.471 ===================================================== 00:26:59.471 Controller Capabilities/Features 00:26:59.471 ================================ 00:26:59.471 Vendor ID: 0000 00:26:59.471 Subsystem Vendor ID: 0000 00:26:59.471 Serial Number: 9e6757f9749323ef8019 00:26:59.471 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:59.471 Firmware Version: 6.8.9-20 00:26:59.471 Recommended Arb Burst: 6 00:26:59.471 IEEE OUI Identifier: 00 00 00 00:26:59.471 Multi-path I/O 00:26:59.471 May have multiple subsystem ports: Yes 00:26:59.471 May have multiple controllers: Yes 00:26:59.471 Associated with SR-IOV VF: No 00:26:59.471 Max Data Transfer Size: Unlimited 00:26:59.471 Max Number of Namespaces: 1024 00:26:59.471 Max Number of I/O Queues: 128 00:26:59.471 NVMe Specification Version (VS): 1.3 00:26:59.471 NVMe Specification Version (Identify): 1.3 00:26:59.471 Maximum Queue Entries: 1024 00:26:59.471 Contiguous Queues Required: No 00:26:59.471 Arbitration Mechanisms Supported 00:26:59.471 Weighted Round Robin: Not Supported 00:26:59.471 Vendor Specific: Not Supported 00:26:59.471 Reset Timeout: 7500 ms 00:26:59.471 Doorbell Stride: 4 bytes 00:26:59.471 NVM Subsystem Reset: Not Supported 00:26:59.471 Command Sets Supported 00:26:59.471 NVM Command Set: Supported 00:26:59.471 Boot Partition: Not Supported 00:26:59.471 
Memory Page Size Minimum: 4096 bytes 00:26:59.471 Memory Page Size Maximum: 4096 bytes 00:26:59.471 Persistent Memory Region: Not Supported 00:26:59.471 Optional Asynchronous Events Supported 00:26:59.471 Namespace Attribute Notices: Supported 00:26:59.471 Firmware Activation Notices: Not Supported 00:26:59.471 ANA Change Notices: Supported 00:26:59.471 PLE Aggregate Log Change Notices: Not Supported 00:26:59.471 LBA Status Info Alert Notices: Not Supported 00:26:59.471 EGE Aggregate Log Change Notices: Not Supported 00:26:59.471 Normal NVM Subsystem Shutdown event: Not Supported 00:26:59.471 Zone Descriptor Change Notices: Not Supported 00:26:59.471 Discovery Log Change Notices: Not Supported 00:26:59.471 Controller Attributes 00:26:59.471 128-bit Host Identifier: Supported 00:26:59.471 Non-Operational Permissive Mode: Not Supported 00:26:59.471 NVM Sets: Not Supported 00:26:59.471 Read Recovery Levels: Not Supported 00:26:59.471 Endurance Groups: Not Supported 00:26:59.471 Predictable Latency Mode: Not Supported 00:26:59.471 Traffic Based Keep ALive: Supported 00:26:59.471 Namespace Granularity: Not Supported 00:26:59.471 SQ Associations: Not Supported 00:26:59.471 UUID List: Not Supported 00:26:59.471 Multi-Domain Subsystem: Not Supported 00:26:59.471 Fixed Capacity Management: Not Supported 00:26:59.471 Variable Capacity Management: Not Supported 00:26:59.471 Delete Endurance Group: Not Supported 00:26:59.471 Delete NVM Set: Not Supported 00:26:59.471 Extended LBA Formats Supported: Not Supported 00:26:59.471 Flexible Data Placement Supported: Not Supported 00:26:59.471 00:26:59.471 Controller Memory Buffer Support 00:26:59.471 ================================ 00:26:59.471 Supported: No 00:26:59.471 00:26:59.471 Persistent Memory Region Support 00:26:59.471 ================================ 00:26:59.471 Supported: No 00:26:59.471 00:26:59.471 Admin Command Set Attributes 00:26:59.471 ============================ 00:26:59.471 Security Send/Receive: Not Supported 00:26:59.471 Format NVM: Not Supported 00:26:59.471 Firmware Activate/Download: Not Supported 00:26:59.471 Namespace Management: Not Supported 00:26:59.471 Device Self-Test: Not Supported 00:26:59.471 Directives: Not Supported 00:26:59.471 NVMe-MI: Not Supported 00:26:59.471 Virtualization Management: Not Supported 00:26:59.471 Doorbell Buffer Config: Not Supported 00:26:59.471 Get LBA Status Capability: Not Supported 00:26:59.471 Command & Feature Lockdown Capability: Not Supported 00:26:59.471 Abort Command Limit: 4 00:26:59.471 Async Event Request Limit: 4 00:26:59.471 Number of Firmware Slots: N/A 00:26:59.471 Firmware Slot 1 Read-Only: N/A 00:26:59.471 Firmware Activation Without Reset: N/A 00:26:59.471 Multiple Update Detection Support: N/A 00:26:59.471 Firmware Update Granularity: No Information Provided 00:26:59.471 Per-Namespace SMART Log: Yes 00:26:59.471 Asymmetric Namespace Access Log Page: Supported 00:26:59.471 ANA Transition Time : 10 sec 00:26:59.471 00:26:59.471 Asymmetric Namespace Access Capabilities 00:26:59.471 ANA Optimized State : Supported 00:26:59.471 ANA Non-Optimized State : Supported 00:26:59.471 ANA Inaccessible State : Supported 00:26:59.471 ANA Persistent Loss State : Supported 00:26:59.471 ANA Change State : Supported 00:26:59.471 ANAGRPID is not changed : No 00:26:59.471 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:59.471 00:26:59.471 ANA Group Identifier Maximum : 128 00:26:59.471 Number of ANA Group Identifiers : 128 00:26:59.471 Max Number of Allowed Namespaces : 1024 00:26:59.471 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:59.471 Command Effects Log Page: Supported 00:26:59.471 Get Log Page Extended Data: Supported 00:26:59.471 Telemetry Log Pages: Not Supported 00:26:59.471 Persistent Event Log Pages: Not Supported 00:26:59.471 Supported Log Pages Log Page: May Support 00:26:59.471 Commands Supported & Effects Log Page: Not Supported 00:26:59.471 Feature Identifiers & Effects Log Page:May Support 00:26:59.471 NVMe-MI Commands & Effects Log Page: May Support 00:26:59.471 Data Area 4 for Telemetry Log: Not Supported 00:26:59.471 Error Log Page Entries Supported: 128 00:26:59.471 Keep Alive: Supported 00:26:59.471 Keep Alive Granularity: 1000 ms 00:26:59.471 00:26:59.471 NVM Command Set Attributes 00:26:59.471 ========================== 00:26:59.471 Submission Queue Entry Size 00:26:59.471 Max: 64 00:26:59.471 Min: 64 00:26:59.472 Completion Queue Entry Size 00:26:59.472 Max: 16 00:26:59.472 Min: 16 00:26:59.472 Number of Namespaces: 1024 00:26:59.472 Compare Command: Not Supported 00:26:59.472 Write Uncorrectable Command: Not Supported 00:26:59.472 Dataset Management Command: Supported 00:26:59.472 Write Zeroes Command: Supported 00:26:59.472 Set Features Save Field: Not Supported 00:26:59.472 Reservations: Not Supported 00:26:59.472 Timestamp: Not Supported 00:26:59.472 Copy: Not Supported 00:26:59.472 Volatile Write Cache: Present 00:26:59.472 Atomic Write Unit (Normal): 1 00:26:59.472 Atomic Write Unit (PFail): 1 00:26:59.472 Atomic Compare & Write Unit: 1 00:26:59.472 Fused Compare & Write: Not Supported 00:26:59.472 Scatter-Gather List 00:26:59.472 SGL Command Set: Supported 00:26:59.472 SGL Keyed: Not Supported 00:26:59.472 SGL Bit Bucket Descriptor: Not Supported 00:26:59.472 SGL Metadata Pointer: Not Supported 00:26:59.472 Oversized SGL: Not Supported 00:26:59.472 SGL Metadata Address: Not Supported 00:26:59.472 SGL Offset: Supported 00:26:59.472 Transport SGL Data Block: Not Supported 00:26:59.472 Replay Protected Memory Block: Not Supported 00:26:59.472 00:26:59.472 Firmware Slot Information 00:26:59.472 ========================= 00:26:59.472 Active slot: 0 00:26:59.472 00:26:59.472 Asymmetric Namespace Access 00:26:59.472 =========================== 00:26:59.472 Change Count : 0 00:26:59.472 Number of ANA Group Descriptors : 1 00:26:59.472 ANA Group Descriptor : 0 00:26:59.472 ANA Group ID : 1 00:26:59.472 Number of NSID Values : 1 00:26:59.472 Change Count : 0 00:26:59.472 ANA State : 1 00:26:59.472 Namespace Identifier : 1 00:26:59.472 00:26:59.472 Commands Supported and Effects 00:26:59.472 ============================== 00:26:59.472 Admin Commands 00:26:59.472 -------------- 00:26:59.472 Get Log Page (02h): Supported 00:26:59.472 Identify (06h): Supported 00:26:59.472 Abort (08h): Supported 00:26:59.472 Set Features (09h): Supported 00:26:59.472 Get Features (0Ah): Supported 00:26:59.472 Asynchronous Event Request (0Ch): Supported 00:26:59.472 Keep Alive (18h): Supported 00:26:59.472 I/O Commands 00:26:59.472 ------------ 00:26:59.472 Flush (00h): Supported 00:26:59.472 Write (01h): Supported LBA-Change 00:26:59.472 Read (02h): Supported 00:26:59.472 Write Zeroes (08h): Supported LBA-Change 00:26:59.472 Dataset Management (09h): Supported 00:26:59.472 00:26:59.472 Error Log 00:26:59.472 ========= 00:26:59.472 Entry: 0 00:26:59.472 Error Count: 0x3 00:26:59.472 Submission Queue Id: 0x0 00:26:59.472 Command Id: 0x5 00:26:59.472 Phase Bit: 0 00:26:59.472 Status Code: 0x2 00:26:59.472 Status Code Type: 0x0 00:26:59.472 Do Not Retry: 1 00:26:59.472 
Error Location: 0x28 00:26:59.472 LBA: 0x0 00:26:59.472 Namespace: 0x0 00:26:59.472 Vendor Log Page: 0x0 00:26:59.472 ----------- 00:26:59.472 Entry: 1 00:26:59.472 Error Count: 0x2 00:26:59.472 Submission Queue Id: 0x0 00:26:59.472 Command Id: 0x5 00:26:59.472 Phase Bit: 0 00:26:59.472 Status Code: 0x2 00:26:59.472 Status Code Type: 0x0 00:26:59.472 Do Not Retry: 1 00:26:59.472 Error Location: 0x28 00:26:59.472 LBA: 0x0 00:26:59.472 Namespace: 0x0 00:26:59.472 Vendor Log Page: 0x0 00:26:59.472 ----------- 00:26:59.472 Entry: 2 00:26:59.472 Error Count: 0x1 00:26:59.472 Submission Queue Id: 0x0 00:26:59.472 Command Id: 0x4 00:26:59.472 Phase Bit: 0 00:26:59.472 Status Code: 0x2 00:26:59.472 Status Code Type: 0x0 00:26:59.472 Do Not Retry: 1 00:26:59.472 Error Location: 0x28 00:26:59.472 LBA: 0x0 00:26:59.472 Namespace: 0x0 00:26:59.472 Vendor Log Page: 0x0 00:26:59.472 00:26:59.472 Number of Queues 00:26:59.472 ================ 00:26:59.472 Number of I/O Submission Queues: 128 00:26:59.472 Number of I/O Completion Queues: 128 00:26:59.472 00:26:59.472 ZNS Specific Controller Data 00:26:59.472 ============================ 00:26:59.472 Zone Append Size Limit: 0 00:26:59.472 00:26:59.472 00:26:59.472 Active Namespaces 00:26:59.472 ================= 00:26:59.472 get_feature(0x05) failed 00:26:59.472 Namespace ID:1 00:26:59.472 Command Set Identifier: NVM (00h) 00:26:59.472 Deallocate: Supported 00:26:59.472 Deallocated/Unwritten Error: Not Supported 00:26:59.472 Deallocated Read Value: Unknown 00:26:59.472 Deallocate in Write Zeroes: Not Supported 00:26:59.472 Deallocated Guard Field: 0xFFFF 00:26:59.472 Flush: Supported 00:26:59.472 Reservation: Not Supported 00:26:59.472 Namespace Sharing Capabilities: Multiple Controllers 00:26:59.472 Size (in LBAs): 3750748848 (1788GiB) 00:26:59.472 Capacity (in LBAs): 3750748848 (1788GiB) 00:26:59.472 Utilization (in LBAs): 3750748848 (1788GiB) 00:26:59.472 UUID: 6a2e00ef-5ae4-4443-ade0-f5084830945a 00:26:59.472 Thin Provisioning: Not Supported 00:26:59.472 Per-NS Atomic Units: Yes 00:26:59.472 Atomic Write Unit (Normal): 8 00:26:59.472 Atomic Write Unit (PFail): 8 00:26:59.472 Preferred Write Granularity: 8 00:26:59.472 Atomic Compare & Write Unit: 8 00:26:59.472 Atomic Boundary Size (Normal): 0 00:26:59.472 Atomic Boundary Size (PFail): 0 00:26:59.472 Atomic Boundary Offset: 0 00:26:59.472 NGUID/EUI64 Never Reused: No 00:26:59.472 ANA group ID: 1 00:26:59.472 Namespace Write Protected: No 00:26:59.472 Number of LBA Formats: 1 00:26:59.472 Current LBA Format: LBA Format #00 00:26:59.472 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:59.472 00:26:59.472 07:09:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:59.472 07:09:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:59.472 07:09:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:26:59.472 07:09:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:59.472 07:09:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:26:59.472 07:09:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:59.472 07:09:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:59.472 rmmod nvme_tcp 00:26:59.472 rmmod nvme_fabrics 00:26:59.472 07:09:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:59.733 07:09:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:26:59.733 07:09:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:26:59.733 07:09:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:26:59.733 07:09:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:59.733 07:09:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:59.733 07:09:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:59.733 07:09:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:26:59.733 07:09:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-save 00:26:59.733 07:09:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:59.733 07:09:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-restore 00:26:59.733 07:09:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:59.733 07:09:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:59.733 07:09:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:59.733 07:09:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:59.733 07:09:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:01.647 07:10:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:01.647 07:10:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:01.647 07:10:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:01.647 07:10:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # echo 0 00:27:01.647 07:10:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:01.647 07:10:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:01.647 07:10:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:01.647 07:10:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:01.647 07:10:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:27:01.647 07:10:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:27:01.647 07:10:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:05.856 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:05.856 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:05.856 0000:80:01.4 
(8086 0b00): ioatdma -> vfio-pci 00:27:05.856 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:05.856 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:05.856 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:05.856 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:05.856 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:05.856 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:05.856 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:05.856 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:05.856 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:05.856 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:05.856 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:05.856 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:05.856 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:05.856 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:05.856 00:27:05.856 real 0m19.592s 00:27:05.856 user 0m5.248s 00:27:05.856 sys 0m11.334s 00:27:05.856 07:10:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:05.856 07:10:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:05.856 ************************************ 00:27:05.856 END TEST nvmf_identify_kernel_target 00:27:05.856 ************************************ 00:27:05.856 07:10:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:05.856 07:10:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:05.856 07:10:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:05.856 07:10:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.856 ************************************ 00:27:05.856 START TEST nvmf_auth_host 00:27:05.856 ************************************ 00:27:05.856 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:06.118 * Looking for test storage... 
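
Before any auth work starts, the harness probes the installed lcov to pick coverage flags; the cmp_versions trace that follows splits each version string on dots and dashes and compares it field by field. A condensed, hypothetical re-implementation of that logic (a sketch, not the verbatim SPDK scripts/common.sh helper):

    # usage: cmp_versions 1.15 '<' 2   -> status 0, because 1.15 < 2
    cmp_versions() {
        local IFS=.-
        local -a ver1=($1) ver2=($3)          # split on dots/dashes, per the IFS above
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do     # missing fields compare as 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $2 == *'>'* ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $2 == *'<'* ]]; return; }
        done
        [[ $2 == *'='* ]]                     # all fields equal: true only for <=, >=, ==
    }
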
00:27:06.118 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:06.118 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:06.118 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:27:06.118 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:06.118 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:06.118 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:06.118 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:06.118 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:06.118 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:27:06.118 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:27:06.118 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:27:06.118 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:27:06.118 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:27:06.118 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:27:06.118 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:27:06.118 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:06.118 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:27:06.118 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:27:06.118 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:06.118 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:06.118 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:27:06.118 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:27:06.118 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:06.118 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:27:06.118 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:27:06.118 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:27:06.118 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:27:06.118 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:06.118 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:27:06.118 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:27:06.118 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:06.118 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:06.118 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:27:06.118 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:06.118 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:06.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:06.118 --rc genhtml_branch_coverage=1 00:27:06.118 --rc genhtml_function_coverage=1 00:27:06.118 --rc genhtml_legend=1 00:27:06.118 --rc geninfo_all_blocks=1 00:27:06.118 --rc geninfo_unexecuted_blocks=1 00:27:06.118 00:27:06.118 ' 00:27:06.119 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:06.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:06.119 --rc genhtml_branch_coverage=1 00:27:06.119 --rc genhtml_function_coverage=1 00:27:06.119 --rc genhtml_legend=1 00:27:06.119 --rc geninfo_all_blocks=1 00:27:06.119 --rc geninfo_unexecuted_blocks=1 00:27:06.119 00:27:06.119 ' 00:27:06.119 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:06.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:06.119 --rc genhtml_branch_coverage=1 00:27:06.119 --rc genhtml_function_coverage=1 00:27:06.119 --rc genhtml_legend=1 00:27:06.119 --rc geninfo_all_blocks=1 00:27:06.119 --rc geninfo_unexecuted_blocks=1 00:27:06.119 00:27:06.119 ' 00:27:06.119 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:06.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:06.119 --rc genhtml_branch_coverage=1 00:27:06.119 --rc genhtml_function_coverage=1 00:27:06.119 --rc genhtml_legend=1 00:27:06.119 --rc geninfo_all_blocks=1 00:27:06.119 --rc geninfo_unexecuted_blocks=1 00:27:06.119 00:27:06.119 ' 00:27:06.119 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:06.119 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:06.119 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:06.119 07:10:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:06.119 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:06.119 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:06.119 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:06.119 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:06.119 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:06.119 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:06.119 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:06.119 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:06.119 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:06.119 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:06.119 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:06.119 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:06.119 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:06.119 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:06.119 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:06.119 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:06.119 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:06.119 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:06.119 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:06.119 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.119 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.119 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.119 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:06.119 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.119 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:27:06.119 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:06.119 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:06.119 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:06.119 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:06.119 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:06.119 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:06.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:06.119 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:06.119 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:06.119 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:06.119 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:06.119 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:06.119 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:27:06.119 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:06.119 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:06.119 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:06.119 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:06.119 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:27:06.119 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:06.119 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:27:06.119 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:06.119 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:06.119 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:06.119 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:06.119 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:06.119 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:06.119 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:06.119 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:27:06.119 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:27:06.119 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:27:06.119 07:10:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.260 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:14.260 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:27:14.260 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:14.260 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:14.260 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:14.260 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:14.260 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:14.260 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:27:14.260 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:14.260 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:27:14.260 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:27:14.260 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:27:14.260 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:27:14.260 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:27:14.260 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:27:14.260 07:10:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:14.260 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:14.260 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:14.260 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:14.260 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:14.260 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:14.260 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:14.260 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:14.260 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:14.260 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:14.260 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:14.260 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:14.260 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:14.260 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:14.260 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:14.260 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:14.260 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:14.260 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:14.260 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:14.260 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:14.260 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:14.260 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:14.260 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:14.260 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:14.260 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:14.260 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:14.260 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:14.260 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:14.260 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:14.260 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:14.260 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:14.260 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:14.260 
07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:14.260 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:14.260 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:14.260 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:14.260 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:14.260 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:14.260 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:14.260 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:14.261 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:14.261 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:14.261 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:14.261 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:14.261 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:14.261 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:14.261 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:14.261 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:14.261 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:14.261 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:14.261 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:14.261 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:14.261 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:14.261 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:14.261 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:14.261 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:14.261 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:14.261 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:14.261 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # is_hw=yes 00:27:14.261 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:27:14.261 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:27:14.261 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:27:14.261 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:14.261 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:14.261 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:14.261 07:10:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:14.261 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:14.261 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:14.261 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:14.261 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:14.261 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:14.261 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:14.261 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:14.261 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:14.261 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:14.261 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:14.261 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:14.261 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:14.261 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:14.261 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:14.261 07:10:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:14.261 07:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:14.261 07:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:14.261 07:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:14.261 07:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:14.261 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:14.261 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.728 ms 00:27:14.261 00:27:14.261 --- 10.0.0.2 ping statistics --- 00:27:14.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:14.261 rtt min/avg/max/mdev = 0.728/0.728/0.728/0.000 ms 00:27:14.261 07:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:14.261 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:14.261 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:27:14.261 00:27:14.261 --- 10.0.0.1 ping statistics --- 00:27:14.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:14.261 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:27:14.261 07:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:14.261 07:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # return 0 00:27:14.261 07:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:14.261 07:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:14.261 07:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:27:14.261 07:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:27:14.261 07:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:14.261 07:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:27:14.261 07:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:27:14.261 07:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:14.261 07:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:14.261 07:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:14.261 07:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.261 07:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # nvmfpid=3276878 00:27:14.261 07:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # waitforlisten 3276878 00:27:14.261 07:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:14.261 07:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 3276878 ']' 00:27:14.261 07:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:14.261 07:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:14.261 07:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
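
The nvmftestinit trace above stitched the two E810 ports into a point-to-point topology: the target-side interface (cvl_0_0) moves into a private network namespace with 10.0.0.2, the initiator interface (cvl_0_1) stays in the root namespace with 10.0.0.1, and an iptables rule admits the NVMe/TCP port. Condensed from the xtrace (interface names are specific to this machine):

    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1      # start clean
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target side into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                      # path check before nvmf_tgt starts

This is also why nvmf_tgt was just launched through ip netns exec cvl_0_0_ns_spdk: NVMF_APP is prefixed with NVMF_TARGET_NS_CMD so the target listens inside the namespace while the host-side tools connect from outside.
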
00:27:14.261 07:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:14.261 07:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.522 07:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:14.522 07:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:27:14.522 07:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:14.522 07:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:14.522 07:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.522 07:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:14.522 07:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:14.522 07:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:14.522 07:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:27:14.522 07:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:14.522 07:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:27:14.523 07:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:27:14.523 07:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:27:14.523 07:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:14.523 07:10:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=f4fb694faea5626257b029043b979d11 00:27:14.523 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:27:14.523 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.H2H 00:27:14.523 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key f4fb694faea5626257b029043b979d11 0 00:27:14.523 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 f4fb694faea5626257b029043b979d11 0 00:27:14.523 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:27:14.523 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:27:14.523 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=f4fb694faea5626257b029043b979d11 00:27:14.523 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:27:14.523 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:27:14.784 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.H2H 00:27:14.784 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.H2H 00:27:14.784 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.H2H 00:27:14.784 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:14.784 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:27:14.784 07:10:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:14.784 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:27:14.784 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:27:14.784 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:27:14.784 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:14.784 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=fa4bd3fba1df0ed76d817ff85f6bf623bbcd2349813420b50cb1cf09af82e0bc 00:27:14.784 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:27:14.784 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.uUj 00:27:14.784 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key fa4bd3fba1df0ed76d817ff85f6bf623bbcd2349813420b50cb1cf09af82e0bc 3 00:27:14.784 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 fa4bd3fba1df0ed76d817ff85f6bf623bbcd2349813420b50cb1cf09af82e0bc 3 00:27:14.784 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:27:14.784 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:27:14.784 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=fa4bd3fba1df0ed76d817ff85f6bf623bbcd2349813420b50cb1cf09af82e0bc 00:27:14.784 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:27:14.784 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:27:14.784 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.uUj 00:27:14.784 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.uUj 00:27:14.784 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.uUj 00:27:14.784 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:14.784 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:27:14.784 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:14.784 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:27:14.784 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:27:14.784 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:27:14.784 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:14.784 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=83790d05c77c100cd8a86e9d29ee6e3c6e5de2f2761fea22 00:27:14.784 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:27:14.784 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.JlT 00:27:14.784 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 83790d05c77c100cd8a86e9d29ee6e3c6e5de2f2761fea22 0 00:27:14.784 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 83790d05c77c100cd8a86e9d29ee6e3c6e5de2f2761fea22 0 
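
The gen_dhchap_key calls above draw len/2 random bytes with xxd, keep them as a hex string, and hand that string to the bare "python -" step (nvmf/common.sh@731) to wrap it in the DHHC-1 secret format. Decoding the finished keys echoed later in this log shows the base64 payload is the ASCII hex string followed by four checksum bytes; the sketch below assumes those are the little-endian CRC-32 of the secret, the convention nvme-cli's gen-dhchap-key also uses:

    key=83790d05c77c100cd8a86e9d29ee6e3c6e5de2f2761fea22   # from the xxd call this line ends on
    digest=0                                               # 0=null, 1=sha256, 2=sha384, 3=sha512
    python - "$key" "$digest" <<'EOF'
import base64, struct, sys, zlib
secret = sys.argv[1].encode()                 # the hex string itself is the secret
crc = struct.pack("<I", zlib.crc32(secret))   # assumed: CRC-32 of the secret, little-endian
print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(secret + crc).decode()))
EOF
    # Expected to match the key echoed for this secret later in the log:
    # DHHC-1:00:ODM3OTBkMDVjNzdjMTAwY2Q4YTg2ZTlkMjllZTZlM2M2ZTVkZTJmMjc2MWZlYTIyGOgF5w==:
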
00:27:14.784 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:27:14.784 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:27:14.784 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=83790d05c77c100cd8a86e9d29ee6e3c6e5de2f2761fea22 00:27:14.784 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:27:14.784 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:27:14.784 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.JlT 00:27:14.784 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.JlT 00:27:14.784 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.JlT 00:27:14.784 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:14.784 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:27:14.784 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:14.784 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:27:14.784 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:27:14.784 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:27:14.784 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:14.784 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=3b4697da30b41cea0d8340f124988bc8c2dd5a1f2c32e102 00:27:14.784 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:27:14.784 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.iaW 00:27:14.784 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 3b4697da30b41cea0d8340f124988bc8c2dd5a1f2c32e102 2 00:27:14.784 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 3b4697da30b41cea0d8340f124988bc8c2dd5a1f2c32e102 2 00:27:14.784 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:27:14.784 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:27:14.785 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=3b4697da30b41cea0d8340f124988bc8c2dd5a1f2c32e102 00:27:14.785 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:27:14.785 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:27:14.785 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.iaW 00:27:14.785 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.iaW 00:27:14.785 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.iaW 00:27:14.785 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:14.785 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:27:14.785 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:14.785 07:10:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:27:14.785 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:27:14.785 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:27:14.785 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:14.785 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=f04f558d88145f5f66e1994ba7c8e9e9 00:27:14.785 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:27:15.045 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.9eJ 00:27:15.045 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key f04f558d88145f5f66e1994ba7c8e9e9 1 00:27:15.045 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 f04f558d88145f5f66e1994ba7c8e9e9 1 00:27:15.045 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:27:15.045 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:27:15.045 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=f04f558d88145f5f66e1994ba7c8e9e9 00:27:15.045 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:27:15.045 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:27:15.045 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.9eJ 00:27:15.045 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.9eJ 00:27:15.045 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.9eJ 00:27:15.045 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:15.045 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:27:15.045 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:15.045 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:27:15.045 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:27:15.045 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:27:15.045 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:15.045 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=15e8b662bee7ad5d4a360eb4ad7f3a5b 00:27:15.045 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:27:15.045 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.Gxu 00:27:15.045 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 15e8b662bee7ad5d4a360eb4ad7f3a5b 1 00:27:15.045 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 15e8b662bee7ad5d4a360eb4ad7f3a5b 1 00:27:15.045 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:27:15.045 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:27:15.045 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # 
key=15e8b662bee7ad5d4a360eb4ad7f3a5b 00:27:15.045 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:27:15.045 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:27:15.046 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.Gxu 00:27:15.046 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.Gxu 00:27:15.046 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Gxu 00:27:15.046 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:15.046 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:27:15.046 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:15.046 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:27:15.046 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:27:15.046 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:27:15.046 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:15.046 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=0e81f4104c1e197caa288fe1432ecf7090ff18fbcea9103d 00:27:15.046 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:27:15.046 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.R9D 00:27:15.046 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 0e81f4104c1e197caa288fe1432ecf7090ff18fbcea9103d 2 00:27:15.046 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 0e81f4104c1e197caa288fe1432ecf7090ff18fbcea9103d 2 00:27:15.046 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:27:15.046 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:27:15.046 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=0e81f4104c1e197caa288fe1432ecf7090ff18fbcea9103d 00:27:15.046 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:27:15.046 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:27:15.046 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.R9D 00:27:15.046 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.R9D 00:27:15.046 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.R9D 00:27:15.046 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:15.046 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:27:15.046 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:15.046 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:27:15.046 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:27:15.046 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:27:15.046 07:10:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:15.046 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=8730d3715573471533384e4d6c6f59ba 00:27:15.046 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:27:15.046 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.oeS 00:27:15.046 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 8730d3715573471533384e4d6c6f59ba 0 00:27:15.046 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 8730d3715573471533384e4d6c6f59ba 0 00:27:15.046 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:27:15.046 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:27:15.046 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=8730d3715573471533384e4d6c6f59ba 00:27:15.046 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:27:15.046 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:27:15.046 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.oeS 00:27:15.046 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.oeS 00:27:15.046 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.oeS 00:27:15.046 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:15.046 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:27:15.046 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:15.046 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:27:15.046 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:27:15.046 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:27:15.307 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:15.307 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=c48cc47cf576669db8f199c29c221441a06f2327f4f6a251e7ae755e151438e7 00:27:15.307 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:27:15.307 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.zEa 00:27:15.307 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key c48cc47cf576669db8f199c29c221441a06f2327f4f6a251e7ae755e151438e7 3 00:27:15.307 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 c48cc47cf576669db8f199c29c221441a06f2327f4f6a251e7ae755e151438e7 3 00:27:15.307 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:27:15.307 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:27:15.307 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=c48cc47cf576669db8f199c29c221441a06f2327f4f6a251e7ae755e151438e7 00:27:15.307 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:27:15.307 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@731 -- # python - 00:27:15.307 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.zEa 00:27:15.307 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.zEa 00:27:15.307 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.zEa 00:27:15.307 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:15.307 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3276878 00:27:15.307 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 3276878 ']' 00:27:15.307 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:15.307 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:15.307 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:15.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:15.307 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:15.307 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.307 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:15.307 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:27:15.307 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:15.307 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.H2H 00:27:15.307 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.307 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.307 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.307 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.uUj ]] 00:27:15.307 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.uUj 00:27:15.307 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.307 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.568 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.568 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:15.568 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.JlT 00:27:15.568 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.568 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.568 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.568 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.iaW ]] 00:27:15.568 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.iaW 00:27:15.568 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.568 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.568 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.568 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:15.568 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.9eJ 00:27:15.569 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.569 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.569 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.569 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Gxu ]] 00:27:15.569 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Gxu 00:27:15.569 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.569 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.569 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.569 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:15.569 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.R9D 00:27:15.569 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.569 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.569 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.569 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.oeS ]] 00:27:15.569 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.oeS 00:27:15.569 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.569 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.569 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.569 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:15.569 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.zEa 00:27:15.569 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.569 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.569 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.569 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:15.569 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:15.569 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:27:15.569 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:15.569 07:10:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:15.569 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:15.569 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.569 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.569 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:15.569 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.569 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:15.569 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:15.569 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:15.569 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:15.569 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:15.569 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:27:15.569 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:15.569 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:15.569 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:15.569 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # local block nvme 00:27:15.569 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # [[ ! 
-e /sys/module/nvmet ]]
00:27:15.569 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # modprobe nvmet
00:27:15.569 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]]
00:27:15.569 07:10:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:27:18.871 Waiting for block devices as requested
00:27:19.131 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma
00:27:19.131 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma
00:27:19.131 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma
00:27:19.131 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma
00:27:19.392 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma
00:27:19.392 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma
00:27:19.392 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma
00:27:19.653 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma
00:27:19.653 0000:65:00.0 (144d a80a): vfio-pci -> nvme
00:27:19.915 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma
00:27:19.915 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma
00:27:19.915 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma
00:27:19.915 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma
00:27:20.176 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma
00:27:20.176 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma
00:27:20.176 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma
00:27:20.176 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma
00:27:21.118 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme*
00:27:21.118 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]]
00:27:21.118 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1
00:27:21.118 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1
00:27:21.118 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:27:21.118 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:27:21.118 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme0n1
00:27:21.118 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt
00:27:21.118 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
No valid GPT data, bailing
00:27:21.118 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:27:21.118 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt=
00:27:21.118 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1
00:27:21.118 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1
00:27:21.118 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]]
00:27:21.118 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:27:21.118 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:27:21.118 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:27:21.118 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0
00:27:21.118 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1
00:27:21.118 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@694 -- # echo /dev/nvme0n1
00:27:21.118 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1
00:27:21.118 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 10.0.0.1
00:27:21.118 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo tcp
00:27:21.118 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 4420
00:27:21.118 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo ipv4
00:27:21.118 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:27:21.118 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420
00:27:21.380
00:27:21.380 Discovery Log Number of Records 2, Generation counter 2
00:27:21.380 =====Discovery Log Entry 0======
00:27:21.380 trtype: tcp
00:27:21.380 adrfam: ipv4
00:27:21.380 subtype: current discovery subsystem
00:27:21.380 treq: not specified, sq flow control disable supported
00:27:21.380 portid: 1
00:27:21.380 trsvcid: 4420
00:27:21.380 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:27:21.380 traddr: 10.0.0.1
00:27:21.380 eflags: none
00:27:21.380 sectype: none
00:27:21.380 =====Discovery Log Entry 1======
00:27:21.380 trtype: tcp
00:27:21.380 adrfam: ipv4
00:27:21.380 subtype: nvme subsystem
00:27:21.380 treq: not specified, sq flow control disable supported
00:27:21.380 portid: 1
00:27:21.380 trsvcid: 4420
00:27:21.380 subnqn: nqn.2024-02.io.spdk:cnode0
00:27:21.380 traddr: 10.0.0.1
00:27:21.380 eflags: none
00:27:21.380 sectype: none
00:27:21.380 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:27:21.380 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:27:21.380 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:27:21.380 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:27:21.380 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:21.380 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:21.380 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:21.380 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:21.380 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODM3OTBkMDVjNzdjMTAwY2Q4YTg2ZTlkMjllZTZlM2M2ZTVkZTJmMjc2MWZlYTIyGOgF5w==:
00:27:21.380 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2I0Njk3ZGEzMGI0MWNlYTBkODM0MGYxMjQ5ODhiYzhjMmRkNWExZjJjMzJlMTAyObq5fQ==:
00:27:21.380 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:21.380 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host
-- host/auth.sh@49 -- # echo ffdhe2048 00:27:21.380 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODM3OTBkMDVjNzdjMTAwY2Q4YTg2ZTlkMjllZTZlM2M2ZTVkZTJmMjc2MWZlYTIyGOgF5w==: 00:27:21.380 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2I0Njk3ZGEzMGI0MWNlYTBkODM0MGYxMjQ5ODhiYzhjMmRkNWExZjJjMzJlMTAyObq5fQ==: ]] 00:27:21.380 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2I0Njk3ZGEzMGI0MWNlYTBkODM0MGYxMjQ5ODhiYzhjMmRkNWExZjJjMzJlMTAyObq5fQ==: 00:27:21.380 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:21.380 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:21.380 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:21.380 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:21.380 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:21.380 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.380 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:21.380 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:21.380 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:21.380 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.380 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:21.380 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.380 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.380 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.380 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.380 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:21.380 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:21.380 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:21.380 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.380 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.380 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:21.380 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.380 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:21.380 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:21.380 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:21.380 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:21.380 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.380 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.380 nvme0n1 00:27:21.380 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.380 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.380 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.380 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.380 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.380 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.641 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.641 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.641 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.641 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.641 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.641 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:21.641 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:21.642 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.642 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:21.642 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.642 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:21.642 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:21.642 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:21.642 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjRmYjY5NGZhZWE1NjI2MjU3YjAyOTA0M2I5NzlkMTFCaEzI: 00:27:21.642 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmE0YmQzZmJhMWRmMGVkNzZkODE3ZmY4NWY2YmY2MjNiYmNkMjM0OTgxMzQyMGI1MGNiMWNmMDlhZjgyZTBiY2txEeI=: 00:27:21.642 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:21.642 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:21.642 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjRmYjY5NGZhZWE1NjI2MjU3YjAyOTA0M2I5NzlkMTFCaEzI: 00:27:21.642 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmE0YmQzZmJhMWRmMGVkNzZkODE3ZmY4NWY2YmY2MjNiYmNkMjM0OTgxMzQyMGI1MGNiMWNmMDlhZjgyZTBiY2txEeI=: ]] 00:27:21.642 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmE0YmQzZmJhMWRmMGVkNzZkODE3ZmY4NWY2YmY2MjNiYmNkMjM0OTgxMzQyMGI1MGNiMWNmMDlhZjgyZTBiY2txEeI=: 00:27:21.642 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
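
From here the test sweeps digest/dhgroup/key combinations. Each iteration first programs the kernel target's expectations for the host, then has the SPDK initiator dial in with the matching keyring entries. The xtrace shows the bare echo commands (host/auth.sh@48-@51) but not their redirection targets; the sketch below assumes the standard Linux nvmet host attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) as the destinations, while the initiator-side RPCs are exactly the ones logged (rpc_cmd in this log is a thin wrapper over SPDK's scripts/rpc.py):

    # Target side: tell nvmet what this host must authenticate with (keyid 0 shown).
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host/dhchap_hash"      # host/auth.sh@48
    echo ffdhe2048 > "$host/dhchap_dhgroup"        # host/auth.sh@49
    echo 'DHHC-1:00:ZjRmYjY5NGZhZWE1NjI2MjU3YjAyOTA0M2I5NzlkMTFCaEzI:' > "$host/dhchap_key"
    echo 'DHHC-1:03:ZmE0YmQzZmJhMWRmMGVkNzZkODE3ZmY4NWY2YmY2MjNiYmNkMjM0OTgxMzQyMGI1MGNiMWNmMDlhZjgyZTBiY2txEeI=:' > "$host/dhchap_ctrl_key"   # bidirectional
    # Initiator side, over the RPC socket of the nvmf_tgt started earlier:
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    scripts/rpc.py bdev_nvme_get_controllers       # a successful handshake lists nvme0
    scripts/rpc.py bdev_nvme_detach_controller nvme0

A successful round trip is what the interleaved nvme0n1 lines below mark: the authenticated controller's namespace appears, then the controller is detached before the next key is tried.
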
00:27:21.642 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.642 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:21.642 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:21.642 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:21.642 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.642 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:21.642 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.642 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.642 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.642 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.642 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:21.642 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:21.642 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:21.642 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.642 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.642 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:21.642 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.642 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:21.642 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:21.642 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:21.642 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:21.642 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.642 07:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.642 nvme0n1 00:27:21.642 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.642 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.642 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.642 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.642 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.642 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.642 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.642 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.642 07:10:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.642 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.904 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.904 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.904 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:21.904 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.904 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:21.904 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:21.904 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:21.904 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODM3OTBkMDVjNzdjMTAwY2Q4YTg2ZTlkMjllZTZlM2M2ZTVkZTJmMjc2MWZlYTIyGOgF5w==: 00:27:21.904 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2I0Njk3ZGEzMGI0MWNlYTBkODM0MGYxMjQ5ODhiYzhjMmRkNWExZjJjMzJlMTAyObq5fQ==: 00:27:21.904 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:21.904 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:21.905 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODM3OTBkMDVjNzdjMTAwY2Q4YTg2ZTlkMjllZTZlM2M2ZTVkZTJmMjc2MWZlYTIyGOgF5w==: 00:27:21.905 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2I0Njk3ZGEzMGI0MWNlYTBkODM0MGYxMjQ5ODhiYzhjMmRkNWExZjJjMzJlMTAyObq5fQ==: ]] 00:27:21.905 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2I0Njk3ZGEzMGI0MWNlYTBkODM0MGYxMjQ5ODhiYzhjMmRkNWExZjJjMzJlMTAyObq5fQ==: 00:27:21.905 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:21.905 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.905 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:21.905 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:21.905 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:21.905 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.905 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:21.905 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.905 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.905 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.905 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.905 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:21.905 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:21.905 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:21.905 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.905 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.905 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:21.905 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.905 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:21.905 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:21.905 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:21.905 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:21.905 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.905 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.905 nvme0n1 00:27:21.905 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.905 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.905 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.905 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.905 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.905 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.905 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.905 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.905 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.905 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.905 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.905 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.905 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:21.905 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.905 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:21.905 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:21.905 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:21.905 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjA0ZjU1OGQ4ODE0NWY1ZjY2ZTE5OTRiYTdjOGU5ZTnGARZ1: 00:27:21.905 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTVlOGI2NjJiZWU3YWQ1ZDRhMzYwZWI0YWQ3ZjNhNWKGRxA4: 00:27:21.905 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:21.905 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:21.905 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:ZjA0ZjU1OGQ4ODE0NWY1ZjY2ZTE5OTRiYTdjOGU5ZTnGARZ1: 00:27:21.905 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTVlOGI2NjJiZWU3YWQ1ZDRhMzYwZWI0YWQ3ZjNhNWKGRxA4: ]] 00:27:21.905 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTVlOGI2NjJiZWU3YWQ1ZDRhMzYwZWI0YWQ3ZjNhNWKGRxA4: 00:27:21.905 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:21.905 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.905 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:21.905 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:21.905 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:21.905 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.905 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:21.905 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.905 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.905 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.166 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.166 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:22.166 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:22.166 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:22.166 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.166 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.166 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:22.166 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.166 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:22.166 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:22.166 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:22.166 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:22.166 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.166 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.166 nvme0n1 00:27:22.166 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.166 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.166 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.166 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:22.166 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.166 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.166 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.166 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.166 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.166 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.166 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.166 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.166 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:22.166 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.166 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:22.166 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:22.166 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:22.166 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGU4MWY0MTA0YzFlMTk3Y2FhMjg4ZmUxNDMyZWNmNzA5MGZmMThmYmNlYTkxMDNkqYo5VA==: 00:27:22.166 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODczMGQzNzE1NTczNDcxNTMzMzg0ZTRkNmM2ZjU5YmGx7+TI: 00:27:22.166 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:22.166 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:22.166 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGU4MWY0MTA0YzFlMTk3Y2FhMjg4ZmUxNDMyZWNmNzA5MGZmMThmYmNlYTkxMDNkqYo5VA==: 00:27:22.166 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODczMGQzNzE1NTczNDcxNTMzMzg0ZTRkNmM2ZjU5YmGx7+TI: ]] 00:27:22.166 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODczMGQzNzE1NTczNDcxNTMzMzg0ZTRkNmM2ZjU5YmGx7+TI: 00:27:22.166 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:22.166 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.167 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:22.167 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:22.167 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:22.167 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.167 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:22.167 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.167 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.167 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.167 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:22.167 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:22.167 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:22.167 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:22.167 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.167 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.167 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:22.167 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.167 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:22.167 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:22.167 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:22.167 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:22.167 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.167 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.428 nvme0n1 00:27:22.428 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.428 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.428 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.428 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.428 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.428 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.428 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.428 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.428 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.428 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.428 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.428 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.428 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:22.428 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.428 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:22.428 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:22.428 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:22.428 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YzQ4Y2M0N2NmNTc2NjY5ZGI4ZjE5OWMyOWMyMjE0NDFhMDZmMjMyN2Y0ZjZhMjUxZTdhZTc1NWUxNTE0MzhlN9StETw=: 00:27:22.428 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:22.428 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:22.428 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:22.428 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzQ4Y2M0N2NmNTc2NjY5ZGI4ZjE5OWMyOWMyMjE0NDFhMDZmMjMyN2Y0ZjZhMjUxZTdhZTc1NWUxNTE0MzhlN9StETw=: 00:27:22.428 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:22.428 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:22.428 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.428 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:22.428 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:22.428 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:22.428 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.428 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:22.428 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.428 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.428 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.428 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.428 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:22.428 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:22.428 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:22.428 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.428 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.428 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:22.428 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.428 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:22.428 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:22.428 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:22.428 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:22.428 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.428 07:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.688 nvme0n1 00:27:22.688 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.688 07:10:22 
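Each key in the ffdhe2048 sweep above exercises the same host-side RPC sequence. A minimal sketch of that sequence, assembled only from the rpc_cmd invocations visible in this trace (connect_authenticate in host/auth.sh); this is a reconstruction for readability, not the script's verbatim source:

    # restrict the host to the digest/dhgroup pair under test
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    # connect to the kernel target at the initiator IP with the key under test
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # authentication succeeded only if the controller actually came up
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0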
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.688 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.688 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.688 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.688 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.688 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.688 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.688 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.688 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.688 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.688 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:22.688 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.688 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:22.688 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.688 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:22.688 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:22.688 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:22.688 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjRmYjY5NGZhZWE1NjI2MjU3YjAyOTA0M2I5NzlkMTFCaEzI: 00:27:22.688 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmE0YmQzZmJhMWRmMGVkNzZkODE3ZmY4NWY2YmY2MjNiYmNkMjM0OTgxMzQyMGI1MGNiMWNmMDlhZjgyZTBiY2txEeI=: 00:27:22.688 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:22.688 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:22.688 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjRmYjY5NGZhZWE1NjI2MjU3YjAyOTA0M2I5NzlkMTFCaEzI: 00:27:22.688 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmE0YmQzZmJhMWRmMGVkNzZkODE3ZmY4NWY2YmY2MjNiYmNkMjM0OTgxMzQyMGI1MGNiMWNmMDlhZjgyZTBiY2txEeI=: ]] 00:27:22.688 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmE0YmQzZmJhMWRmMGVkNzZkODE3ZmY4NWY2YmY2MjNiYmNkMjM0OTgxMzQyMGI1MGNiMWNmMDlhZjgyZTBiY2txEeI=: 00:27:22.688 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:22.688 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.688 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:22.688 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:22.688 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:22.688 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.688 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:22.688 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.688 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.688 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.688 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.688 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:22.688 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:22.688 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:22.689 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.689 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.689 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:22.689 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.689 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:22.689 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:22.689 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:22.689 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:22.689 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.689 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.949 nvme0n1 00:27:22.949 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.949 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.949 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.949 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.949 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.949 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.949 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.949 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.949 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.949 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.949 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.949 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.949 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:22.949 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:27:22.949 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:22.949 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:22.949 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:22.949 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODM3OTBkMDVjNzdjMTAwY2Q4YTg2ZTlkMjllZTZlM2M2ZTVkZTJmMjc2MWZlYTIyGOgF5w==: 00:27:22.949 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2I0Njk3ZGEzMGI0MWNlYTBkODM0MGYxMjQ5ODhiYzhjMmRkNWExZjJjMzJlMTAyObq5fQ==: 00:27:22.949 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:22.949 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:22.949 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODM3OTBkMDVjNzdjMTAwY2Q4YTg2ZTlkMjllZTZlM2M2ZTVkZTJmMjc2MWZlYTIyGOgF5w==: 00:27:22.949 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2I0Njk3ZGEzMGI0MWNlYTBkODM0MGYxMjQ5ODhiYzhjMmRkNWExZjJjMzJlMTAyObq5fQ==: ]] 00:27:22.949 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2I0Njk3ZGEzMGI0MWNlYTBkODM0MGYxMjQ5ODhiYzhjMmRkNWExZjJjMzJlMTAyObq5fQ==: 00:27:22.949 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:22.949 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.949 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:22.950 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:22.950 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:22.950 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.950 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:22.950 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.950 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.950 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.950 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.950 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:22.950 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:22.950 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:22.950 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.950 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.950 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:22.950 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.950 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:22.950 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:22.950 
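The echoes traced at host/auth.sh@48-51 above program the kernel target's side of the handshake. xtrace does not record redirection targets, so the configfs paths in this sketch are assumptions based on the standard Linux nvmet auth layout; only the echoed values themselves come from this log:

    # hypothetical configfs destinations for the values echoed at @48-51
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0  # hostnqn from the attach calls
    echo 'hmac(sha256)' > "$host/dhchap_hash"      # digest echoed at @48
    echo ffdhe3072 > "$host/dhchap_dhgroup"        # dhgroup echoed at @49
    echo "$key" > "$host/dhchap_key"               # DHHC-1:... secret echoed at @50
    [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"  # @51: only when a ctrlr key exists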
07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:22.950 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:22.950 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.950 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.211 nvme0n1 00:27:23.211 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.211 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.211 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.211 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.211 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.211 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.211 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.211 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.211 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.211 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.211 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.211 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.211 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:23.211 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.211 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:23.211 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:23.211 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:23.211 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjA0ZjU1OGQ4ODE0NWY1ZjY2ZTE5OTRiYTdjOGU5ZTnGARZ1: 00:27:23.211 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTVlOGI2NjJiZWU3YWQ1ZDRhMzYwZWI0YWQ3ZjNhNWKGRxA4: 00:27:23.211 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:23.211 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:23.211 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjA0ZjU1OGQ4ODE0NWY1ZjY2ZTE5OTRiYTdjOGU5ZTnGARZ1: 00:27:23.211 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTVlOGI2NjJiZWU3YWQ1ZDRhMzYwZWI0YWQ3ZjNhNWKGRxA4: ]] 00:27:23.211 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTVlOGI2NjJiZWU3YWQ1ZDRhMzYwZWI0YWQ3ZjNhNWKGRxA4: 00:27:23.211 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:23.211 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.211 07:10:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:23.211 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:23.211 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:23.211 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.211 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:23.211 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.211 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.211 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.211 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.211 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:23.211 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:23.211 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:23.211 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.211 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.211 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:23.211 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.211 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:23.211 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:23.211 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:23.211 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:23.211 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.211 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.473 nvme0n1 00:27:23.473 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.473 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.473 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.473 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.473 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.473 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.473 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.473 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.473 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.473 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:23.473 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.473 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.473 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:23.473 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.473 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:23.473 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:23.473 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:23.473 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGU4MWY0MTA0YzFlMTk3Y2FhMjg4ZmUxNDMyZWNmNzA5MGZmMThmYmNlYTkxMDNkqYo5VA==: 00:27:23.473 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODczMGQzNzE1NTczNDcxNTMzMzg0ZTRkNmM2ZjU5YmGx7+TI: 00:27:23.473 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:23.473 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:23.473 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGU4MWY0MTA0YzFlMTk3Y2FhMjg4ZmUxNDMyZWNmNzA5MGZmMThmYmNlYTkxMDNkqYo5VA==: 00:27:23.473 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODczMGQzNzE1NTczNDcxNTMzMzg0ZTRkNmM2ZjU5YmGx7+TI: ]] 00:27:23.473 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODczMGQzNzE1NTczNDcxNTMzMzg0ZTRkNmM2ZjU5YmGx7+TI: 00:27:23.473 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:23.473 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.473 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:23.473 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:23.473 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:23.473 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.473 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:23.473 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.473 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.473 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.473 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.473 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:23.473 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:23.473 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:23.473 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.473 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.474 07:10:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:23.474 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.474 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:23.474 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:23.474 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:23.474 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:23.474 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.474 07:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.735 nvme0n1 00:27:23.735 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.735 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.735 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.735 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.735 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.735 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.735 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.735 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.735 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.735 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.735 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.735 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.735 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:23.735 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.735 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:23.735 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:23.735 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:23.735 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzQ4Y2M0N2NmNTc2NjY5ZGI4ZjE5OWMyOWMyMjE0NDFhMDZmMjMyN2Y0ZjZhMjUxZTdhZTc1NWUxNTE0MzhlN9StETw=: 00:27:23.735 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:23.735 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:23.735 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:23.735 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzQ4Y2M0N2NmNTc2NjY5ZGI4ZjE5OWMyOWMyMjE0NDFhMDZmMjMyN2Y0ZjZhMjUxZTdhZTc1NWUxNTE0MzhlN9StETw=: 00:27:23.735 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:23.735 07:10:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:23.735 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.735 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:23.735 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:23.735 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:23.735 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.735 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:23.735 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.735 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.735 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.735 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.735 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:23.735 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:23.735 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:23.735 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.735 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.735 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:23.735 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.735 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:23.735 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:23.735 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:23.735 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:23.735 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.736 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.997 nvme0n1 00:27:23.997 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.997 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.997 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.997 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.997 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.997 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.997 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.997 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
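keyid=4 is the one asymmetric case in each sweep: its ckeys entry is empty, so the [[ -z '' ]] checks above fall through and the controller is attached with --dhchap-key key4 alone, i.e. unidirectional authentication. The array expansion responsible appears verbatim in the trace; a sketch of how it feeds the attach call (the exact quoting in host/auth.sh may differ):

    # empty ckeys[keyid] -> ckey=() -> no --dhchap-ctrlr-key argument at all
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"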
bdev_nvme_detach_controller nvme0 00:27:23.997 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.997 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.997 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.997 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:23.997 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.997 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:23.997 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.997 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:23.997 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:23.997 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:23.997 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjRmYjY5NGZhZWE1NjI2MjU3YjAyOTA0M2I5NzlkMTFCaEzI: 00:27:23.997 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmE0YmQzZmJhMWRmMGVkNzZkODE3ZmY4NWY2YmY2MjNiYmNkMjM0OTgxMzQyMGI1MGNiMWNmMDlhZjgyZTBiY2txEeI=: 00:27:23.997 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:23.997 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:23.997 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjRmYjY5NGZhZWE1NjI2MjU3YjAyOTA0M2I5NzlkMTFCaEzI: 00:27:23.997 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmE0YmQzZmJhMWRmMGVkNzZkODE3ZmY4NWY2YmY2MjNiYmNkMjM0OTgxMzQyMGI1MGNiMWNmMDlhZjgyZTBiY2txEeI=: ]] 00:27:23.997 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmE0YmQzZmJhMWRmMGVkNzZkODE3ZmY4NWY2YmY2MjNiYmNkMjM0OTgxMzQyMGI1MGNiMWNmMDlhZjgyZTBiY2txEeI=: 00:27:23.997 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:23.997 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.997 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:23.997 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:23.997 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:23.997 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.997 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:23.997 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.997 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.997 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.997 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.997 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:23.997 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates=() 00:27:23.997 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:23.997 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.997 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.997 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:23.997 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.997 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:23.997 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:23.997 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:23.997 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:23.997 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.997 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.258 nvme0n1 00:27:24.258 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.258 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.258 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.258 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.258 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.258 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.258 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.258 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.258 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.258 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.258 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.258 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.258 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:24.258 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.258 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:24.258 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:24.258 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:24.258 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODM3OTBkMDVjNzdjMTAwY2Q4YTg2ZTlkMjllZTZlM2M2ZTVkZTJmMjc2MWZlYTIyGOgF5w==: 00:27:24.258 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2I0Njk3ZGEzMGI0MWNlYTBkODM0MGYxMjQ5ODhiYzhjMmRkNWExZjJjMzJlMTAyObq5fQ==: 00:27:24.258 07:10:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:24.258 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:24.258 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODM3OTBkMDVjNzdjMTAwY2Q4YTg2ZTlkMjllZTZlM2M2ZTVkZTJmMjc2MWZlYTIyGOgF5w==: 00:27:24.258 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2I0Njk3ZGEzMGI0MWNlYTBkODM0MGYxMjQ5ODhiYzhjMmRkNWExZjJjMzJlMTAyObq5fQ==: ]] 00:27:24.258 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2I0Njk3ZGEzMGI0MWNlYTBkODM0MGYxMjQ5ODhiYzhjMmRkNWExZjJjMzJlMTAyObq5fQ==: 00:27:24.258 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:24.258 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.258 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:24.258 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:24.259 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:24.259 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.259 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:24.259 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.259 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.259 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.259 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.259 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:24.259 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:24.259 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:24.259 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.259 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.259 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:24.259 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.259 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:24.259 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:24.259 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:24.259 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:24.259 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.259 07:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.519 nvme0n1 00:27:24.519 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:27:24.519 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.519 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.520 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.520 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.520 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.781 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.781 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.781 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.781 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.781 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.781 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.781 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:24.781 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.781 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:24.781 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:24.781 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:24.781 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjA0ZjU1OGQ4ODE0NWY1ZjY2ZTE5OTRiYTdjOGU5ZTnGARZ1: 00:27:24.781 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTVlOGI2NjJiZWU3YWQ1ZDRhMzYwZWI0YWQ3ZjNhNWKGRxA4: 00:27:24.781 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:24.781 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:24.781 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjA0ZjU1OGQ4ODE0NWY1ZjY2ZTE5OTRiYTdjOGU5ZTnGARZ1: 00:27:24.781 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTVlOGI2NjJiZWU3YWQ1ZDRhMzYwZWI0YWQ3ZjNhNWKGRxA4: ]] 00:27:24.781 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTVlOGI2NjJiZWU3YWQ1ZDRhMzYwZWI0YWQ3ZjNhNWKGRxA4: 00:27:24.781 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:24.781 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.781 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:24.781 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:24.781 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:24.781 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.781 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:24.781 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:27:24.781 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.781 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.781 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.781 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:24.781 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:24.781 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:24.781 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.781 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.781 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:24.781 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.781 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:24.781 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:24.781 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:24.781 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:24.781 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.781 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.043 nvme0n1 00:27:25.043 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.043 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.043 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.043 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.043 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.043 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.043 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.043 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.043 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.043 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.043 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.043 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.043 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:25.043 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.043 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:25.043 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:27:25.043 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:25.043 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGU4MWY0MTA0YzFlMTk3Y2FhMjg4ZmUxNDMyZWNmNzA5MGZmMThmYmNlYTkxMDNkqYo5VA==: 00:27:25.043 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODczMGQzNzE1NTczNDcxNTMzMzg0ZTRkNmM2ZjU5YmGx7+TI: 00:27:25.043 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:25.043 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:25.043 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGU4MWY0MTA0YzFlMTk3Y2FhMjg4ZmUxNDMyZWNmNzA5MGZmMThmYmNlYTkxMDNkqYo5VA==: 00:27:25.043 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODczMGQzNzE1NTczNDcxNTMzMzg0ZTRkNmM2ZjU5YmGx7+TI: ]] 00:27:25.043 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODczMGQzNzE1NTczNDcxNTMzMzg0ZTRkNmM2ZjU5YmGx7+TI: 00:27:25.043 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:25.043 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.043 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:25.043 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:25.043 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:25.043 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.043 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:25.043 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.043 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.043 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.043 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.043 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:25.043 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:25.043 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:25.043 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.043 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.043 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:25.043 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.043 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:25.043 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:25.043 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:25.043 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:25.043 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.043 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.305 nvme0n1 00:27:25.305 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.305 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.305 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.305 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.305 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.305 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.305 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.305 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.305 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.305 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.305 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.305 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.305 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:25.305 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.305 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:25.305 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:25.305 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:25.305 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzQ4Y2M0N2NmNTc2NjY5ZGI4ZjE5OWMyOWMyMjE0NDFhMDZmMjMyN2Y0ZjZhMjUxZTdhZTc1NWUxNTE0MzhlN9StETw=: 00:27:25.305 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:25.305 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:25.305 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:25.305 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzQ4Y2M0N2NmNTc2NjY5ZGI4ZjE5OWMyOWMyMjE0NDFhMDZmMjMyN2Y0ZjZhMjUxZTdhZTc1NWUxNTE0MzhlN9StETw=: 00:27:25.305 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:25.305 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:25.305 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.305 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:25.305 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:25.305 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:25.305 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.305 07:10:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:25.305 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.305 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.305 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.305 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.305 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:25.305 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:25.305 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:25.305 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.305 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.305 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:25.305 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.305 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:25.305 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:25.305 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:25.305 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:25.305 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.305 07:10:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.566 nvme0n1 00:27:25.566 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.566 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.566 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.566 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.566 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.566 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.827 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.827 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.827 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.827 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.827 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.827 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:25.827 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.827 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:27:25.827 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.827 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:25.827 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:25.827 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:25.827 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjRmYjY5NGZhZWE1NjI2MjU3YjAyOTA0M2I5NzlkMTFCaEzI: 00:27:25.827 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmE0YmQzZmJhMWRmMGVkNzZkODE3ZmY4NWY2YmY2MjNiYmNkMjM0OTgxMzQyMGI1MGNiMWNmMDlhZjgyZTBiY2txEeI=: 00:27:25.827 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:25.827 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:25.827 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjRmYjY5NGZhZWE1NjI2MjU3YjAyOTA0M2I5NzlkMTFCaEzI: 00:27:25.827 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmE0YmQzZmJhMWRmMGVkNzZkODE3ZmY4NWY2YmY2MjNiYmNkMjM0OTgxMzQyMGI1MGNiMWNmMDlhZjgyZTBiY2txEeI=: ]] 00:27:25.827 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmE0YmQzZmJhMWRmMGVkNzZkODE3ZmY4NWY2YmY2MjNiYmNkMjM0OTgxMzQyMGI1MGNiMWNmMDlhZjgyZTBiY2txEeI=: 00:27:25.827 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:25.827 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.827 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:25.827 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:25.827 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:25.827 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.827 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:25.827 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.827 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.827 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.827 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.827 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:25.827 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:25.827 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:25.827 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.827 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.827 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:25.827 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.827 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # 
ip=NVMF_INITIATOR_IP 00:27:25.827 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:25.827 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:25.827 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:25.827 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.827 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.090 nvme0n1 00:27:26.090 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.090 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.090 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.090 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.090 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.090 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.090 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.090 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.090 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.090 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.351 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.351 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.351 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:26.351 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.351 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:26.351 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:26.351 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:26.351 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODM3OTBkMDVjNzdjMTAwY2Q4YTg2ZTlkMjllZTZlM2M2ZTVkZTJmMjc2MWZlYTIyGOgF5w==: 00:27:26.351 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2I0Njk3ZGEzMGI0MWNlYTBkODM0MGYxMjQ5ODhiYzhjMmRkNWExZjJjMzJlMTAyObq5fQ==: 00:27:26.351 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:26.351 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:26.351 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODM3OTBkMDVjNzdjMTAwY2Q4YTg2ZTlkMjllZTZlM2M2ZTVkZTJmMjc2MWZlYTIyGOgF5w==: 00:27:26.351 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2I0Njk3ZGEzMGI0MWNlYTBkODM0MGYxMjQ5ODhiYzhjMmRkNWExZjJjMzJlMTAyObq5fQ==: ]] 00:27:26.351 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2I0Njk3ZGEzMGI0MWNlYTBkODM0MGYxMjQ5ODhiYzhjMmRkNWExZjJjMzJlMTAyObq5fQ==: 
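At this point in the trace the suite has finished the sha256/ffdhe4096 pass and has just installed target key 1 for ffdhe6144; the next entries connect and verify with that key. Every iteration follows the same RPC sequence seen above. A condensed sketch of one iteration, with all values copied from the trace (rpc_cmd is assumed to be the suite's wrapper around SPDK's scripts/rpc.py, and nvmet_auth_set_key the suite's target-side helper; neither is defined in this log):

  # One connect_authenticate cycle, as traced (sha256 / ffdhe6144 / keyid 1)
  digest=sha256 dhgroup=ffdhe6144 keyid=1

  # Target side: install the DH-HMAC-CHAP key (and controller key, when one
  # is defined for this keyid) for the host NQN.
  nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

  # Host side: restrict the initiator to the digest/dhgroup under test ...
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

  # ... connect with the matching key pair, verify the controller came up,
  # then tear it down before the next keyid.
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0

The trace below repeats this cycle for the remaining keyids, then re-runs the matrix with ffdhe8192 and, further down, switches the digest to sha384 starting from ffdhe2048.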
00:27:26.351 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:26.351 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.351 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:26.351 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:26.351 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:26.351 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.351 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:26.351 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.351 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.351 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.351 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.351 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:26.351 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:26.351 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:26.351 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.351 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.351 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:26.351 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.351 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:26.351 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:26.351 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:26.351 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:26.351 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.351 07:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.612 nvme0n1 00:27:26.612 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.612 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.612 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.612 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.612 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.612 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.612 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.613 07:10:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.613 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.613 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.613 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.613 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.613 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:26.613 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.613 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:26.613 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:26.613 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:26.613 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjA0ZjU1OGQ4ODE0NWY1ZjY2ZTE5OTRiYTdjOGU5ZTnGARZ1: 00:27:26.613 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTVlOGI2NjJiZWU3YWQ1ZDRhMzYwZWI0YWQ3ZjNhNWKGRxA4: 00:27:26.613 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:26.613 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:26.613 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjA0ZjU1OGQ4ODE0NWY1ZjY2ZTE5OTRiYTdjOGU5ZTnGARZ1: 00:27:26.613 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTVlOGI2NjJiZWU3YWQ1ZDRhMzYwZWI0YWQ3ZjNhNWKGRxA4: ]] 00:27:26.613 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTVlOGI2NjJiZWU3YWQ1ZDRhMzYwZWI0YWQ3ZjNhNWKGRxA4: 00:27:26.613 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:26.613 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.613 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:26.613 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:26.613 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:26.613 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.613 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:26.613 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.613 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.613 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.873 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.873 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:26.873 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:26.873 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:26.873 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.873 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.873 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:26.873 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.873 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:26.873 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:26.873 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:26.873 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:26.873 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.873 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.136 nvme0n1 00:27:27.136 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.136 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.136 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.136 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.136 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.136 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.136 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.136 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.136 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.136 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.136 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.136 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.136 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:27.136 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.136 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:27.136 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:27.136 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:27.136 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGU4MWY0MTA0YzFlMTk3Y2FhMjg4ZmUxNDMyZWNmNzA5MGZmMThmYmNlYTkxMDNkqYo5VA==: 00:27:27.136 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODczMGQzNzE1NTczNDcxNTMzMzg0ZTRkNmM2ZjU5YmGx7+TI: 00:27:27.137 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:27.137 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:27.137 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:MGU4MWY0MTA0YzFlMTk3Y2FhMjg4ZmUxNDMyZWNmNzA5MGZmMThmYmNlYTkxMDNkqYo5VA==: 00:27:27.137 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODczMGQzNzE1NTczNDcxNTMzMzg0ZTRkNmM2ZjU5YmGx7+TI: ]] 00:27:27.137 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODczMGQzNzE1NTczNDcxNTMzMzg0ZTRkNmM2ZjU5YmGx7+TI: 00:27:27.137 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:27.137 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.137 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:27.137 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:27.137 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:27.137 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.137 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:27.137 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.137 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.137 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.137 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.137 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:27.137 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:27.137 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:27.137 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.137 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.137 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:27.137 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.137 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:27.137 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:27.137 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:27.137 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:27.137 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.137 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.709 nvme0n1 00:27:27.709 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.709 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.709 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.709 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.709 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.709 07:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.709 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.709 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.709 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.709 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.709 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.709 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.709 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:27.709 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.709 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:27.709 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:27.709 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:27.709 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzQ4Y2M0N2NmNTc2NjY5ZGI4ZjE5OWMyOWMyMjE0NDFhMDZmMjMyN2Y0ZjZhMjUxZTdhZTc1NWUxNTE0MzhlN9StETw=: 00:27:27.709 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:27.709 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:27.709 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:27.709 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzQ4Y2M0N2NmNTc2NjY5ZGI4ZjE5OWMyOWMyMjE0NDFhMDZmMjMyN2Y0ZjZhMjUxZTdhZTc1NWUxNTE0MzhlN9StETw=: 00:27:27.709 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:27.709 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:27:27.709 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.709 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:27.709 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:27.709 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:27.709 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.709 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:27.709 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.709 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.709 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.709 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.709 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:27.709 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@768 -- # ip_candidates=() 00:27:27.709 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:27.709 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.709 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.709 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:27.709 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.709 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:27.709 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:27.709 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:27.709 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:27.709 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.709 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.280 nvme0n1 00:27:28.281 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.281 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.281 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.281 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.281 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.281 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.281 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.281 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.281 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.281 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.281 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.281 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:28.281 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.281 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:28.281 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.281 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:28.281 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:28.281 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:28.281 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjRmYjY5NGZhZWE1NjI2MjU3YjAyOTA0M2I5NzlkMTFCaEzI: 00:27:28.281 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZmE0YmQzZmJhMWRmMGVkNzZkODE3ZmY4NWY2YmY2MjNiYmNkMjM0OTgxMzQyMGI1MGNiMWNmMDlhZjgyZTBiY2txEeI=: 00:27:28.281 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:28.281 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:28.281 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjRmYjY5NGZhZWE1NjI2MjU3YjAyOTA0M2I5NzlkMTFCaEzI: 00:27:28.281 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmE0YmQzZmJhMWRmMGVkNzZkODE3ZmY4NWY2YmY2MjNiYmNkMjM0OTgxMzQyMGI1MGNiMWNmMDlhZjgyZTBiY2txEeI=: ]] 00:27:28.281 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmE0YmQzZmJhMWRmMGVkNzZkODE3ZmY4NWY2YmY2MjNiYmNkMjM0OTgxMzQyMGI1MGNiMWNmMDlhZjgyZTBiY2txEeI=: 00:27:28.281 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:28.281 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.281 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:28.281 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:28.281 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:28.281 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.281 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:28.281 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.281 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.281 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.281 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.281 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:28.281 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:28.281 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:28.281 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.281 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.281 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:28.281 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.281 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:28.281 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:28.281 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:28.281 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:28.281 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.281 07:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:28.851 nvme0n1 00:27:28.851 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.851 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.851 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.851 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.851 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.851 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.851 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.851 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.851 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.851 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.851 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.851 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.851 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:28.851 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.851 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:28.851 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:28.851 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:28.851 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODM3OTBkMDVjNzdjMTAwY2Q4YTg2ZTlkMjllZTZlM2M2ZTVkZTJmMjc2MWZlYTIyGOgF5w==: 00:27:28.851 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2I0Njk3ZGEzMGI0MWNlYTBkODM0MGYxMjQ5ODhiYzhjMmRkNWExZjJjMzJlMTAyObq5fQ==: 00:27:28.851 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:28.851 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:28.851 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODM3OTBkMDVjNzdjMTAwY2Q4YTg2ZTlkMjllZTZlM2M2ZTVkZTJmMjc2MWZlYTIyGOgF5w==: 00:27:28.851 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2I0Njk3ZGEzMGI0MWNlYTBkODM0MGYxMjQ5ODhiYzhjMmRkNWExZjJjMzJlMTAyObq5fQ==: ]] 00:27:28.851 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2I0Njk3ZGEzMGI0MWNlYTBkODM0MGYxMjQ5ODhiYzhjMmRkNWExZjJjMzJlMTAyObq5fQ==: 00:27:28.851 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:28.851 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.851 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:28.851 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:28.851 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:28.851 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:28.851 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:28.851 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.851 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.851 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.851 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.851 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:28.851 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:28.851 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:28.851 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.851 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.851 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:28.851 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.851 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:28.851 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:28.851 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:28.851 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:28.851 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.851 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.793 nvme0n1 00:27:29.793 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.793 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.793 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.793 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.793 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.793 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.793 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.793 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.793 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.793 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.793 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.793 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.793 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:29.793 
07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.793 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:29.793 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:29.793 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:29.793 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjA0ZjU1OGQ4ODE0NWY1ZjY2ZTE5OTRiYTdjOGU5ZTnGARZ1: 00:27:29.793 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTVlOGI2NjJiZWU3YWQ1ZDRhMzYwZWI0YWQ3ZjNhNWKGRxA4: 00:27:29.793 07:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:29.793 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:29.793 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjA0ZjU1OGQ4ODE0NWY1ZjY2ZTE5OTRiYTdjOGU5ZTnGARZ1: 00:27:29.793 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTVlOGI2NjJiZWU3YWQ1ZDRhMzYwZWI0YWQ3ZjNhNWKGRxA4: ]] 00:27:29.793 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTVlOGI2NjJiZWU3YWQ1ZDRhMzYwZWI0YWQ3ZjNhNWKGRxA4: 00:27:29.793 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:29.793 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.793 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:29.793 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:29.793 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:29.793 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.793 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:29.793 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.793 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.793 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.793 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.793 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:29.793 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:29.793 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:29.793 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.793 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.793 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:29.793 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.793 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:29.793 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:29.793 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:29.793 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:29.793 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.793 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.366 nvme0n1 00:27:30.366 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.366 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.366 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.366 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.366 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.366 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.366 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.366 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.366 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.366 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.366 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.366 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.366 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:30.366 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.366 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:30.366 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:30.366 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:30.366 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGU4MWY0MTA0YzFlMTk3Y2FhMjg4ZmUxNDMyZWNmNzA5MGZmMThmYmNlYTkxMDNkqYo5VA==: 00:27:30.366 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODczMGQzNzE1NTczNDcxNTMzMzg0ZTRkNmM2ZjU5YmGx7+TI: 00:27:30.366 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:30.366 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:30.366 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGU4MWY0MTA0YzFlMTk3Y2FhMjg4ZmUxNDMyZWNmNzA5MGZmMThmYmNlYTkxMDNkqYo5VA==: 00:27:30.366 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODczMGQzNzE1NTczNDcxNTMzMzg0ZTRkNmM2ZjU5YmGx7+TI: ]] 00:27:30.366 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODczMGQzNzE1NTczNDcxNTMzMzg0ZTRkNmM2ZjU5YmGx7+TI: 00:27:30.366 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:30.366 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.366 
07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:30.366 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:30.366 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:30.366 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.366 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:30.366 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.366 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.366 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.366 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.366 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:30.366 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:30.366 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:30.366 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.366 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.366 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:30.366 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.366 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:30.366 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:30.366 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:30.366 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:30.366 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.366 07:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.937 nvme0n1 00:27:30.937 07:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.937 07:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.937 07:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.937 07:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.937 07:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.937 07:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.937 07:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.937 07:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.937 07:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.937 07:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:31.198 07:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.198 07:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.198 07:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:31.198 07:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.198 07:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:31.198 07:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:31.198 07:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:31.198 07:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzQ4Y2M0N2NmNTc2NjY5ZGI4ZjE5OWMyOWMyMjE0NDFhMDZmMjMyN2Y0ZjZhMjUxZTdhZTc1NWUxNTE0MzhlN9StETw=: 00:27:31.198 07:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:31.198 07:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:31.198 07:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:31.198 07:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzQ4Y2M0N2NmNTc2NjY5ZGI4ZjE5OWMyOWMyMjE0NDFhMDZmMjMyN2Y0ZjZhMjUxZTdhZTc1NWUxNTE0MzhlN9StETw=: 00:27:31.198 07:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:31.198 07:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:31.198 07:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.198 07:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:31.198 07:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:31.198 07:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:31.198 07:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.198 07:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:31.198 07:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.198 07:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.198 07:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.198 07:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.198 07:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:31.198 07:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:31.198 07:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:31.198 07:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.198 07:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.198 07:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:31.198 07:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.198 07:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:31.198 07:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:31.198 07:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:31.198 07:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:31.198 07:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.198 07:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.769 nvme0n1 00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjRmYjY5NGZhZWE1NjI2MjU3YjAyOTA0M2I5NzlkMTFCaEzI: 00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmE0YmQzZmJhMWRmMGVkNzZkODE3ZmY4NWY2YmY2MjNiYmNkMjM0OTgxMzQyMGI1MGNiMWNmMDlhZjgyZTBiY2txEeI=: 00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjRmYjY5NGZhZWE1NjI2MjU3YjAyOTA0M2I5NzlkMTFCaEzI: 00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0
00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjRmYjY5NGZhZWE1NjI2MjU3YjAyOTA0M2I5NzlkMTFCaEzI:
00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmE0YmQzZmJhMWRmMGVkNzZkODE3ZmY4NWY2YmY2MjNiYmNkMjM0OTgxMzQyMGI1MGNiMWNmMDlhZjgyZTBiY2txEeI=:
00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjRmYjY5NGZhZWE1NjI2MjU3YjAyOTA0M2I5NzlkMTFCaEzI:
00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmE0YmQzZmJhMWRmMGVkNzZkODE3ZmY4NWY2YmY2MjNiYmNkMjM0OTgxMzQyMGI1MGNiMWNmMDlhZjgyZTBiY2txEeI=: ]]
00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmE0YmQzZmJhMWRmMGVkNzZkODE3ZmY4NWY2YmY2MjNiYmNkMjM0OTgxMzQyMGI1MGNiMWNmMDlhZjgyZTBiY2txEeI=:
00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0
00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:31.769 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:32.031 nvme0n1
00:27:32.031 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:32.031 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:32.031 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:32.031 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:32.031 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:32.031 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:32.031 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:32.031 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:32.031 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:32.031 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:32.031 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:32.031 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:32.031 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1
00:27:32.031 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:32.031 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:32.031 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:32.031 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:32.031 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODM3OTBkMDVjNzdjMTAwY2Q4YTg2ZTlkMjllZTZlM2M2ZTVkZTJmMjc2MWZlYTIyGOgF5w==:
00:27:32.031 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2I0Njk3ZGEzMGI0MWNlYTBkODM0MGYxMjQ5ODhiYzhjMmRkNWExZjJjMzJlMTAyObq5fQ==:
00:27:32.031 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:32.031 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:32.031 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODM3OTBkMDVjNzdjMTAwY2Q4YTg2ZTlkMjllZTZlM2M2ZTVkZTJmMjc2MWZlYTIyGOgF5w==:
00:27:32.031 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2I0Njk3ZGEzMGI0MWNlYTBkODM0MGYxMjQ5ODhiYzhjMmRkNWExZjJjMzJlMTAyObq5fQ==: ]]
00:27:32.031 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2I0Njk3ZGEzMGI0MWNlYTBkODM0MGYxMjQ5ODhiYzhjMmRkNWExZjJjMzJlMTAyObq5fQ==:
00:27:32.031 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1
00:27:32.031 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:32.031 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:32.031 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:32.031 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:27:32.031 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:32.031 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:27:32.031 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:32.031 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:32.031 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:32.031 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:32.031 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:27:32.031 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:27:32.031 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:27:32.031 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:32.031 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:32.031 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:27:32.031 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:32.031 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:27:32.031 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:27:32.031 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:27:32.031 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:32.031 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:32.031 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:32.292 nvme0n1
00:27:32.292 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:32.292 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:32.292 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:32.292 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:32.292 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:32.292 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:32.292 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:32.292 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:32.292 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:32.292 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:32.292 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:32.292 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:32.292 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2
00:27:32.292 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:32.292 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:32.292 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:32.292 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:27:32.292 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjA0ZjU1OGQ4ODE0NWY1ZjY2ZTE5OTRiYTdjOGU5ZTnGARZ1:
00:27:32.292 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTVlOGI2NjJiZWU3YWQ1ZDRhMzYwZWI0YWQ3ZjNhNWKGRxA4:
00:27:32.292 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:32.292 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:32.292 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjA0ZjU1OGQ4ODE0NWY1ZjY2ZTE5OTRiYTdjOGU5ZTnGARZ1:
00:27:32.292 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTVlOGI2NjJiZWU3YWQ1ZDRhMzYwZWI0YWQ3ZjNhNWKGRxA4: ]]
00:27:32.292 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTVlOGI2NjJiZWU3YWQ1ZDRhMzYwZWI0YWQ3ZjNhNWKGRxA4:
00:27:32.292 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2
00:27:32.292 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:32.292 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:32.292 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:32.292 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:27:32.292 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:32.292 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:27:32.292 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:32.292 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:32.292 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:32.292 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:32.292 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:27:32.292 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:27:32.292 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:27:32.292 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:32.292 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:32.292 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:27:32.292 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:32.292 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:27:32.292 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:27:32.292 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:27:32.292 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:32.292 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:32.292 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:32.292 nvme0n1
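On the target side, nvmet_auth_set_key (the host/auth.sh@42-51 entries in each cycle) pushes the matching material into the kernel nvmet target: the echo 'hmac(shaN)', echo <dhgroup>, and echo DHHC-1:... lines traced above are the values being written, but their destinations are not visible in the trace. A hedged sketch assuming the standard kernel nvmet configfs attributes for per-host DH-HMAC-CHAP (the paths below are assumptions based on the Linux nvmet auth interface, not something this log shows):

# sketch: plausible destinations for the three echoes traced above (paths assumed)
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha384)' > "$host/dhchap_hash"      # digest, from host/auth.sh@48
echo ffdhe2048 > "$host/dhchap_dhgroup"        # DH group, from host/auth.sh@49
echo 'DHHC-1:00:ZjRmYjY5NGZhZWE1NjI2MjU3YjAyOTA0M2I5NzlkMTFCaEzI:' > "$host/dhchap_key"   # host key, from host/auth.sh@50
# when ckey is non-empty, it would analogously go to "$host/dhchap_ctrl_key"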
00:27:32.553 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:32.553 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:32.553 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:32.553 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:32.553 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:32.553 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:32.553 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:32.553 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:32.553 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:32.553 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:32.553 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:32.553 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:32.553 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3
00:27:32.553 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:32.553 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:32.553 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:32.553 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:27:32.553 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGU4MWY0MTA0YzFlMTk3Y2FhMjg4ZmUxNDMyZWNmNzA5MGZmMThmYmNlYTkxMDNkqYo5VA==:
00:27:32.553 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODczMGQzNzE1NTczNDcxNTMzMzg0ZTRkNmM2ZjU5YmGx7+TI:
00:27:32.553 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:32.553 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:32.553 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGU4MWY0MTA0YzFlMTk3Y2FhMjg4ZmUxNDMyZWNmNzA5MGZmMThmYmNlYTkxMDNkqYo5VA==:
00:27:32.553 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODczMGQzNzE1NTczNDcxNTMzMzg0ZTRkNmM2ZjU5YmGx7+TI: ]]
00:27:32.553 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODczMGQzNzE1NTczNDcxNTMzMzg0ZTRkNmM2ZjU5YmGx7+TI:
00:27:32.553 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3
00:27:32.553 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:32.553 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:32.553 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:32.553 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:27:32.553 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:32.553 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:27:32.553 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:32.553 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:32.553 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:32.553 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:32.553 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:27:32.553 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:27:32.553 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:27:32.553 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:32.553 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:32.553 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:27:32.553 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:32.553 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:27:32.553 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:27:32.553 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:27:32.553 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:32.553 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:32.553 07:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:32.553 nvme0n1
00:27:32.553 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:32.553 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:32.553 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:32.553 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:32.553 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:32.553 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:32.553 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:32.553 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:32.553 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:32.553 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:32.814 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:32.814 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:32.814 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4
00:27:32.814 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:32.814 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:32.814 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:32.814 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:32.814 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzQ4Y2M0N2NmNTc2NjY5ZGI4ZjE5OWMyOWMyMjE0NDFhMDZmMjMyN2Y0ZjZhMjUxZTdhZTc1NWUxNTE0MzhlN9StETw=:
00:27:32.814 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:32.814 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:32.814 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:32.814 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzQ4Y2M0N2NmNTc2NjY5ZGI4ZjE5OWMyOWMyMjE0NDFhMDZmMjMyN2Y0ZjZhMjUxZTdhZTc1NWUxNTE0MzhlN9StETw=:
00:27:32.814 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:32.814 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4
00:27:32.814 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:32.814 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:32.814 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:32.814 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:32.814 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:32.814 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:27:32.814 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:32.814 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:32.814 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:32.814 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:32.814 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:27:32.814 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:27:32.814 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:27:32.814 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:32.814 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:32.814 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:27:32.814 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:32.814 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:27:32.814 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:27:32.814 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:27:32.815 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:32.815 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:32.815 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:32.815 nvme0n1
00:27:33.076 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:33.076 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:33.076 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:33.076 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:33.076 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:33.076 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:33.076 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:33.076 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:33.076 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:33.076 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:33.076 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:33.076 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:33.076 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:33.076 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0
00:27:33.076 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:33.076 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:33.076 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:33.076 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:27:33.076 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjRmYjY5NGZhZWE1NjI2MjU3YjAyOTA0M2I5NzlkMTFCaEzI:
00:27:33.076 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmE0YmQzZmJhMWRmMGVkNzZkODE3ZmY4NWY2YmY2MjNiYmNkMjM0OTgxMzQyMGI1MGNiMWNmMDlhZjgyZTBiY2txEeI=:
00:27:33.076 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:33.076 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:33.076 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjRmYjY5NGZhZWE1NjI2MjU3YjAyOTA0M2I5NzlkMTFCaEzI:
00:27:33.076 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmE0YmQzZmJhMWRmMGVkNzZkODE3ZmY4NWY2YmY2MjNiYmNkMjM0OTgxMzQyMGI1MGNiMWNmMDlhZjgyZTBiY2txEeI=: ]]
00:27:33.076 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmE0YmQzZmJhMWRmMGVkNzZkODE3ZmY4NWY2YmY2MjNiYmNkMjM0OTgxMzQyMGI1MGNiMWNmMDlhZjgyZTBiY2txEeI=:
00:27:33.076 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0
00:27:33.076 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:33.076 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:33.076 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:33.076 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:27:33.076 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:33.076 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:27:33.076 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:33.076 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:33.076 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:33.076 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:33.076 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:27:33.076 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:27:33.076 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:27:33.076 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:33.076 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:33.076 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:27:33.076 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:33.076 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:27:33.076 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:27:33.076 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:27:33.076 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:33.076 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:33.076 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:33.076 nvme0n1
00:27:33.338 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:33.338 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:33.338 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:33.338 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:33.338 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:33.338 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:33.338 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:33.338 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:33.338 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:33.338 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:33.338 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:33.338 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:33.338 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1
00:27:33.338 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:33.338 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:33.338 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:33.338 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:33.338 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODM3OTBkMDVjNzdjMTAwY2Q4YTg2ZTlkMjllZTZlM2M2ZTVkZTJmMjc2MWZlYTIyGOgF5w==:
00:27:33.338 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2I0Njk3ZGEzMGI0MWNlYTBkODM0MGYxMjQ5ODhiYzhjMmRkNWExZjJjMzJlMTAyObq5fQ==:
00:27:33.338 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:33.338 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:33.338 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODM3OTBkMDVjNzdjMTAwY2Q4YTg2ZTlkMjllZTZlM2M2ZTVkZTJmMjc2MWZlYTIyGOgF5w==:
00:27:33.338 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2I0Njk3ZGEzMGI0MWNlYTBkODM0MGYxMjQ5ODhiYzhjMmRkNWExZjJjMzJlMTAyObq5fQ==: ]]
00:27:33.338 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2I0Njk3ZGEzMGI0MWNlYTBkODM0MGYxMjQ5ODhiYzhjMmRkNWExZjJjMzJlMTAyObq5fQ==:
00:27:33.338 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1
00:27:33.338 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:33.338 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:33.338 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:33.338 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:27:33.338 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:33.338 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:27:33.338 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:33.338 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:33.338 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:33.338 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:33.338 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:27:33.338 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:27:33.338 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:27:33.338 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:33.338 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:33.338 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:27:33.338 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:33.338 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:27:33.338 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:27:33.338 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
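The get_main_ns_ip helper traced at nvmf/common.sh@767-781 in every cycle just maps the transport to the right environment variable and prints the resolved address. A condensed bash reimplementation of the logic the trace shows (variable names are taken from the trace; treating TEST_TRANSPORT as the lookup key and the exact error handling are assumptions, since only the expanded values are visible here):

# sketch of the helper's visible logic; note the indirection: the map stores the
# *name* of an env var (e.g. NVMF_INITIATOR_IP), and ${!ip} dereferences it
get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z ${!ip} ]] && return 1   # dereferenced value; 10.0.0.1 in this run
    echo "${!ip}"
}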
00:27:33.338 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:33.338 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:33.338 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:33.338 nvme0n1
00:27:33.338 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:33.338 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:33.338 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:33.338 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:33.338 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:33.599 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:33.599 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:33.599 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:33.599 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:33.599 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:33.599 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:33.599 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:33.599 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2
00:27:33.599 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:33.599 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:33.599 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:33.599 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:27:33.599 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjA0ZjU1OGQ4ODE0NWY1ZjY2ZTE5OTRiYTdjOGU5ZTnGARZ1:
00:27:33.599 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTVlOGI2NjJiZWU3YWQ1ZDRhMzYwZWI0YWQ3ZjNhNWKGRxA4:
00:27:33.599 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:33.599 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:33.599 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjA0ZjU1OGQ4ODE0NWY1ZjY2ZTE5OTRiYTdjOGU5ZTnGARZ1:
00:27:33.599 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTVlOGI2NjJiZWU3YWQ1ZDRhMzYwZWI0YWQ3ZjNhNWKGRxA4: ]]
00:27:33.599 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTVlOGI2NjJiZWU3YWQ1ZDRhMzYwZWI0YWQ3ZjNhNWKGRxA4:
00:27:33.600 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2
00:27:33.600 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:33.600 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:33.600 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:33.600 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:27:33.600 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:33.600 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:27:33.600 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:33.600 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:33.600 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:33.600 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:33.600 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:27:33.600 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:27:33.600 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:27:33.600 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:33.600 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:33.600 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:27:33.600 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:33.600 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:27:33.600 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:27:33.600 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:27:33.600 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:33.600 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:33.600 07:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:33.600 nvme0n1
00:27:33.861 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:33.861 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:33.861 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:33.861 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:33.861 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:33.861 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:33.861 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:33.861 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:33.861 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:33.861 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:33.861 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:33.861 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:33.861 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3
00:27:33.861 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:33.861 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:33.861 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:33.861 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:27:33.861 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGU4MWY0MTA0YzFlMTk3Y2FhMjg4ZmUxNDMyZWNmNzA5MGZmMThmYmNlYTkxMDNkqYo5VA==:
00:27:33.861 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODczMGQzNzE1NTczNDcxNTMzMzg0ZTRkNmM2ZjU5YmGx7+TI:
00:27:33.861 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:33.861 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:33.861 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGU4MWY0MTA0YzFlMTk3Y2FhMjg4ZmUxNDMyZWNmNzA5MGZmMThmYmNlYTkxMDNkqYo5VA==:
00:27:33.861 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODczMGQzNzE1NTczNDcxNTMzMzg0ZTRkNmM2ZjU5YmGx7+TI: ]]
00:27:33.861 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODczMGQzNzE1NTczNDcxNTMzMzg0ZTRkNmM2ZjU5YmGx7+TI:
00:27:33.861 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3
00:27:33.861 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:33.861 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:33.861 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:33.861 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:27:33.861 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:33.861 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:27:33.861 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:33.861 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:33.861 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:33.861 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:33.861 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:27:33.861 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:27:33.861 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:27:33.861 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:33.861 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:33.861 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:27:33.861 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:33.861 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:27:33.861 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:27:33.861 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:27:33.861 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:33.861 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:33.861 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:33.861 nvme0n1
00:27:34.122 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:34.122 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:34.122 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:34.122 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:34.122 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:34.122 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:34.122 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:34.122 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:34.122 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:34.122 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:34.122 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:34.122 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:34.122 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4
00:27:34.122 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:34.122 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:34.122 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:34.122 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:34.122 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzQ4Y2M0N2NmNTc2NjY5ZGI4ZjE5OWMyOWMyMjE0NDFhMDZmMjMyN2Y0ZjZhMjUxZTdhZTc1NWUxNTE0MzhlN9StETw=:
00:27:34.122 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:34.122 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:34.122 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:34.122 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzQ4Y2M0N2NmNTc2NjY5ZGI4ZjE5OWMyOWMyMjE0NDFhMDZmMjMyN2Y0ZjZhMjUxZTdhZTc1NWUxNTE0MzhlN9StETw=:
00:27:34.122 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:34.122 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4
00:27:34.122 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:34.122 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:34.122 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:34.122 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:34.122 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:34.122 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:27:34.122 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:34.122 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:34.122 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:34.122 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:34.122 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:27:34.122 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:27:34.122 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:27:34.122 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:34.122 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:34.122 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:27:34.122 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:34.122 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:27:34.122 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:27:34.122 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:27:34.122 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:34.122 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:34.122 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:34.122 nvme0n1
00:27:34.122 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:34.122 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:34.122 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:34.122 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:34.122 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
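One small bash idiom worth noting in every cycle above: the host/auth.sh@58 line uses the ${var:+alternate} expansion to build an optional pair of arguments. When ckeys[keyid] is empty (as for keyid 4), the ckey array stays empty and no --dhchap-ctrlr-key flag reaches bdev_nvme_attach_controller; when it is set, both the flag and the key name are appended. A standalone illustration of the idiom (the two sample entries here are hypothetical, for demonstration only):

# sketch of the conditional-argument idiom from host/auth.sh@58
ckeys=([0]="DHHC-1:03:example-not-a-real-key:" [4]="")
for keyid in 0 4; do
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid -> ${ckey[*]:-<no controller key>}"
done
# prints: keyid=0 -> --dhchap-ctrlr-key ckey0
#         keyid=4 -> <no controller key>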
00:27:34.122 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:34.122 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:34.122 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:34.122 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:34.122 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:34.383 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:34.383 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:34.383 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:34.383 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0
00:27:34.383 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:34.383 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:34.383 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:27:34.383 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:27:34.383 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjRmYjY5NGZhZWE1NjI2MjU3YjAyOTA0M2I5NzlkMTFCaEzI:
00:27:34.383 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmE0YmQzZmJhMWRmMGVkNzZkODE3ZmY4NWY2YmY2MjNiYmNkMjM0OTgxMzQyMGI1MGNiMWNmMDlhZjgyZTBiY2txEeI=:
00:27:34.383 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:34.383 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:27:34.383 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjRmYjY5NGZhZWE1NjI2MjU3YjAyOTA0M2I5NzlkMTFCaEzI:
00:27:34.383 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmE0YmQzZmJhMWRmMGVkNzZkODE3ZmY4NWY2YmY2MjNiYmNkMjM0OTgxMzQyMGI1MGNiMWNmMDlhZjgyZTBiY2txEeI=: ]]
00:27:34.383 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmE0YmQzZmJhMWRmMGVkNzZkODE3ZmY4NWY2YmY2MjNiYmNkMjM0OTgxMzQyMGI1MGNiMWNmMDlhZjgyZTBiY2txEeI=:
00:27:34.383 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0
00:27:34.383 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:34.383 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:34.383 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:27:34.383 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:27:34.383 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:34.383 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:27:34.383 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:34.383 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:34.383 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:34.383 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:34.383 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:27:34.383 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:27:34.383 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:27:34.383 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:34.383 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:34.383 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:27:34.383 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:34.383 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:27:34.383 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:27:34.383 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:27:34.383 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:34.383 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:34.383 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:34.644 nvme0n1
00:27:34.644 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:34.644 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:34.644 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:34.644 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:34.644 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:34.644 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:34.644 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:34.644 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:34.644 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:34.644 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:34.644 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:34.644 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:34.644 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1
00:27:34.644 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:34.644 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:34.644 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:27:34.644 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:34.644 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODM3OTBkMDVjNzdjMTAwY2Q4YTg2ZTlkMjllZTZlM2M2ZTVkZTJmMjc2MWZlYTIyGOgF5w==:
00:27:34.644 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2I0Njk3ZGEzMGI0MWNlYTBkODM0MGYxMjQ5ODhiYzhjMmRkNWExZjJjMzJlMTAyObq5fQ==:
00:27:34.644 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:34.644 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:27:34.644 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODM3OTBkMDVjNzdjMTAwY2Q4YTg2ZTlkMjllZTZlM2M2ZTVkZTJmMjc2MWZlYTIyGOgF5w==:
00:27:34.644 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2I0Njk3ZGEzMGI0MWNlYTBkODM0MGYxMjQ5ODhiYzhjMmRkNWExZjJjMzJlMTAyObq5fQ==: ]]
00:27:34.644 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2I0Njk3ZGEzMGI0MWNlYTBkODM0MGYxMjQ5ODhiYzhjMmRkNWExZjJjMzJlMTAyObq5fQ==:
00:27:34.644 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1
00:27:34.644 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:34.644 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:34.644 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:27:34.644 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:27:34.644 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:34.644 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:27:34.644 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:34.644 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:34.644 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:34.644 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:34.644 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:27:34.644 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:27:34.644 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:27:34.644 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:34.644 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:34.644 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:27:34.644 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:34.644 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:27:34.644 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:27:34.644 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:27:34.645 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:34.645 07:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:34.906 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:34.906 nvme0n1
00:27:34.906 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:34.906 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:34.906 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:34.906 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:34.906 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:34.906 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:34.906 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:34.906 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:34.906 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:34.906 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:34.906 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:34.906 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:34.906 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2
00:27:34.906 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:34.906 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:34.906 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:27:34.906 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:27:34.906 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjA0ZjU1OGQ4ODE0NWY1ZjY2ZTE5OTRiYTdjOGU5ZTnGARZ1:
00:27:34.906 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTVlOGI2NjJiZWU3YWQ1ZDRhMzYwZWI0YWQ3ZjNhNWKGRxA4:
00:27:34.906 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:34.906 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:27:34.906 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjA0ZjU1OGQ4ODE0NWY1ZjY2ZTE5OTRiYTdjOGU5ZTnGARZ1:
00:27:34.906 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTVlOGI2NjJiZWU3YWQ1ZDRhMzYwZWI0YWQ3ZjNhNWKGRxA4: ]]
00:27:34.906 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTVlOGI2NjJiZWU3YWQ1ZDRhMzYwZWI0YWQ3ZjNhNWKGRxA4:
00:27:34.906 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2
00:27:34.906 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:34.906 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:34.906 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:27:34.906 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:27:34.906 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- #
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.906 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:34.906 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.906 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.906 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.906 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.906 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:34.906 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:34.906 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:34.906 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.906 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.906 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:34.906 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.906 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:34.906 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:34.906 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:34.906 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:34.906 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.906 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.166 nvme0n1 00:27:35.166 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.166 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.166 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.166 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.166 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.166 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.166 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.166 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.166 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.166 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.166 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.166 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.166 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:27:35.166 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.166 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:35.166 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:35.166 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:35.166 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGU4MWY0MTA0YzFlMTk3Y2FhMjg4ZmUxNDMyZWNmNzA5MGZmMThmYmNlYTkxMDNkqYo5VA==: 00:27:35.166 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODczMGQzNzE1NTczNDcxNTMzMzg0ZTRkNmM2ZjU5YmGx7+TI: 00:27:35.167 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:35.167 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:35.167 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGU4MWY0MTA0YzFlMTk3Y2FhMjg4ZmUxNDMyZWNmNzA5MGZmMThmYmNlYTkxMDNkqYo5VA==: 00:27:35.167 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODczMGQzNzE1NTczNDcxNTMzMzg0ZTRkNmM2ZjU5YmGx7+TI: ]] 00:27:35.167 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODczMGQzNzE1NTczNDcxNTMzMzg0ZTRkNmM2ZjU5YmGx7+TI: 00:27:35.167 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:27:35.167 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.167 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:35.167 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:35.167 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:35.167 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.167 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:35.167 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.167 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.167 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.167 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.167 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:35.167 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:35.167 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:35.167 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.167 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.167 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:35.167 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.167 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:35.167 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:35.167 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:35.167 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:35.167 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.167 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.428 nvme0n1 00:27:35.428 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.428 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.428 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.428 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.428 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.428 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.688 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.688 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.688 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.688 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.688 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.688 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.688 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:35.688 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.688 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:35.688 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:35.688 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:35.688 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzQ4Y2M0N2NmNTc2NjY5ZGI4ZjE5OWMyOWMyMjE0NDFhMDZmMjMyN2Y0ZjZhMjUxZTdhZTc1NWUxNTE0MzhlN9StETw=: 00:27:35.688 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:35.688 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:35.688 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:35.688 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzQ4Y2M0N2NmNTc2NjY5ZGI4ZjE5OWMyOWMyMjE0NDFhMDZmMjMyN2Y0ZjZhMjUxZTdhZTc1NWUxNTE0MzhlN9StETw=: 00:27:35.688 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:35.688 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:27:35.688 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.688 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:35.688 07:10:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:35.688 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:35.688 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.688 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:35.688 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.688 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.688 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.688 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.688 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:35.688 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:35.688 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:35.688 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.688 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.688 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:35.688 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.688 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:35.688 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:35.688 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:35.688 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:35.688 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.688 07:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.949 nvme0n1 00:27:35.949 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.949 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.949 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.949 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.949 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.949 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.949 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.949 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.949 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.949 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.950 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.950 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:35.950 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.950 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:35.950 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.950 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:35.950 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:35.950 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:35.950 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjRmYjY5NGZhZWE1NjI2MjU3YjAyOTA0M2I5NzlkMTFCaEzI: 00:27:35.950 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmE0YmQzZmJhMWRmMGVkNzZkODE3ZmY4NWY2YmY2MjNiYmNkMjM0OTgxMzQyMGI1MGNiMWNmMDlhZjgyZTBiY2txEeI=: 00:27:35.950 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:35.950 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:35.950 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjRmYjY5NGZhZWE1NjI2MjU3YjAyOTA0M2I5NzlkMTFCaEzI: 00:27:35.950 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmE0YmQzZmJhMWRmMGVkNzZkODE3ZmY4NWY2YmY2MjNiYmNkMjM0OTgxMzQyMGI1MGNiMWNmMDlhZjgyZTBiY2txEeI=: ]] 00:27:35.950 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmE0YmQzZmJhMWRmMGVkNzZkODE3ZmY4NWY2YmY2MjNiYmNkMjM0OTgxMzQyMGI1MGNiMWNmMDlhZjgyZTBiY2txEeI=: 00:27:35.950 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:35.950 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.950 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:35.950 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:35.950 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:35.950 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.950 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:35.950 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.950 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.950 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.950 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.950 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:35.950 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:35.950 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:35.950 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.950 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.950 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:35.950 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.950 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:35.950 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:35.950 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:35.950 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:35.950 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.950 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.520 nvme0n1 00:27:36.520 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.520 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.520 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.520 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.520 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.520 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.520 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.520 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.520 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.520 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.520 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.520 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.520 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:27:36.520 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.520 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:36.520 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:36.520 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:36.520 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODM3OTBkMDVjNzdjMTAwY2Q4YTg2ZTlkMjllZTZlM2M2ZTVkZTJmMjc2MWZlYTIyGOgF5w==: 00:27:36.520 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2I0Njk3ZGEzMGI0MWNlYTBkODM0MGYxMjQ5ODhiYzhjMmRkNWExZjJjMzJlMTAyObq5fQ==: 00:27:36.520 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:36.521 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:36.521 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ODM3OTBkMDVjNzdjMTAwY2Q4YTg2ZTlkMjllZTZlM2M2ZTVkZTJmMjc2MWZlYTIyGOgF5w==: 00:27:36.521 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2I0Njk3ZGEzMGI0MWNlYTBkODM0MGYxMjQ5ODhiYzhjMmRkNWExZjJjMzJlMTAyObq5fQ==: ]] 00:27:36.521 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2I0Njk3ZGEzMGI0MWNlYTBkODM0MGYxMjQ5ODhiYzhjMmRkNWExZjJjMzJlMTAyObq5fQ==: 00:27:36.521 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:27:36.521 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.521 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:36.521 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:36.521 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:36.521 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.521 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:36.521 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.521 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.521 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.521 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.521 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:36.521 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:36.521 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:36.521 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.521 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.521 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:36.521 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.521 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:36.521 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:36.521 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:36.521 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:36.521 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.521 07:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.781 nvme0n1 00:27:36.781 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.781 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.781 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.781 07:10:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.781 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.781 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.042 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.042 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.042 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.042 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.042 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.042 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.042 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:27:37.042 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.042 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:37.042 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:37.042 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:37.042 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjA0ZjU1OGQ4ODE0NWY1ZjY2ZTE5OTRiYTdjOGU5ZTnGARZ1: 00:27:37.042 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTVlOGI2NjJiZWU3YWQ1ZDRhMzYwZWI0YWQ3ZjNhNWKGRxA4: 00:27:37.042 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:37.042 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:37.042 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjA0ZjU1OGQ4ODE0NWY1ZjY2ZTE5OTRiYTdjOGU5ZTnGARZ1: 00:27:37.042 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTVlOGI2NjJiZWU3YWQ1ZDRhMzYwZWI0YWQ3ZjNhNWKGRxA4: ]] 00:27:37.042 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTVlOGI2NjJiZWU3YWQ1ZDRhMzYwZWI0YWQ3ZjNhNWKGRxA4: 00:27:37.042 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:27:37.042 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.042 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:37.042 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:37.042 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:37.042 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.042 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:37.042 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.042 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.042 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.042 07:10:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.042 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:37.042 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:37.042 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:37.042 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.042 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.042 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:37.042 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.042 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:37.042 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:37.042 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:37.042 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:37.042 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.042 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.303 nvme0n1 00:27:37.303 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.303 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.303 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.303 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.303 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.303 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.303 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.303 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.303 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.303 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.564 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.564 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.564 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:27:37.564 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.564 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:37.564 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:37.564 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:37.564 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MGU4MWY0MTA0YzFlMTk3Y2FhMjg4ZmUxNDMyZWNmNzA5MGZmMThmYmNlYTkxMDNkqYo5VA==: 00:27:37.564 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODczMGQzNzE1NTczNDcxNTMzMzg0ZTRkNmM2ZjU5YmGx7+TI: 00:27:37.564 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:37.564 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:37.564 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGU4MWY0MTA0YzFlMTk3Y2FhMjg4ZmUxNDMyZWNmNzA5MGZmMThmYmNlYTkxMDNkqYo5VA==: 00:27:37.564 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODczMGQzNzE1NTczNDcxNTMzMzg0ZTRkNmM2ZjU5YmGx7+TI: ]] 00:27:37.564 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODczMGQzNzE1NTczNDcxNTMzMzg0ZTRkNmM2ZjU5YmGx7+TI: 00:27:37.565 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:27:37.565 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.565 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:37.565 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:37.565 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:37.565 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.565 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:37.565 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.565 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.565 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.565 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.565 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:37.565 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:37.565 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:37.565 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.565 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.565 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:37.565 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.565 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:37.565 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:37.565 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:37.565 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:37.565 07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.565 
07:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.826 nvme0n1 00:27:37.826 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.826 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.826 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.826 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.826 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.826 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.826 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.826 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.826 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.826 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.826 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.826 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.826 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:27:37.826 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.826 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:37.826 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:37.826 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:37.826 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzQ4Y2M0N2NmNTc2NjY5ZGI4ZjE5OWMyOWMyMjE0NDFhMDZmMjMyN2Y0ZjZhMjUxZTdhZTc1NWUxNTE0MzhlN9StETw=: 00:27:37.826 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:37.826 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:37.826 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:37.826 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzQ4Y2M0N2NmNTc2NjY5ZGI4ZjE5OWMyOWMyMjE0NDFhMDZmMjMyN2Y0ZjZhMjUxZTdhZTc1NWUxNTE0MzhlN9StETw=: 00:27:37.826 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:37.826 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:27:37.826 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.826 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:37.826 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:37.826 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:37.826 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.826 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:37.826 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.826 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.138 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.138 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.138 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:38.138 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:38.138 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:38.138 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.138 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.138 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:38.138 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.138 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:38.138 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:38.138 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:38.138 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:38.138 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.138 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.430 nvme0n1 00:27:38.430 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.430 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.430 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.430 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.430 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.430 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.430 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.430 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.430 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.430 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.430 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.430 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:38.430 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.430 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:27:38.430 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.430 07:10:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:38.430 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:38.430 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:38.430 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjRmYjY5NGZhZWE1NjI2MjU3YjAyOTA0M2I5NzlkMTFCaEzI: 00:27:38.430 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmE0YmQzZmJhMWRmMGVkNzZkODE3ZmY4NWY2YmY2MjNiYmNkMjM0OTgxMzQyMGI1MGNiMWNmMDlhZjgyZTBiY2txEeI=: 00:27:38.430 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:38.430 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:38.430 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjRmYjY5NGZhZWE1NjI2MjU3YjAyOTA0M2I5NzlkMTFCaEzI: 00:27:38.430 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmE0YmQzZmJhMWRmMGVkNzZkODE3ZmY4NWY2YmY2MjNiYmNkMjM0OTgxMzQyMGI1MGNiMWNmMDlhZjgyZTBiY2txEeI=: ]] 00:27:38.430 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmE0YmQzZmJhMWRmMGVkNzZkODE3ZmY4NWY2YmY2MjNiYmNkMjM0OTgxMzQyMGI1MGNiMWNmMDlhZjgyZTBiY2txEeI=: 00:27:38.430 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:27:38.430 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.430 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:38.430 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:38.430 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:38.430 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.430 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:38.430 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.430 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.430 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.430 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.430 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:38.430 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:38.430 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:38.430 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.430 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.430 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:38.430 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.430 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:38.430 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:38.430 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:38.430 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:38.430 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.430 07:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.025 nvme0n1 00:27:39.025 07:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.025 07:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.025 07:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.025 07:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.025 07:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.025 07:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.025 07:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.025 07:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.025 07:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.025 07:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.286 07:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.286 07:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.286 07:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:27:39.286 07:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.286 07:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:39.286 07:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:39.286 07:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:39.286 07:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODM3OTBkMDVjNzdjMTAwY2Q4YTg2ZTlkMjllZTZlM2M2ZTVkZTJmMjc2MWZlYTIyGOgF5w==: 00:27:39.286 07:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2I0Njk3ZGEzMGI0MWNlYTBkODM0MGYxMjQ5ODhiYzhjMmRkNWExZjJjMzJlMTAyObq5fQ==: 00:27:39.286 07:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:39.286 07:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:39.286 07:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODM3OTBkMDVjNzdjMTAwY2Q4YTg2ZTlkMjllZTZlM2M2ZTVkZTJmMjc2MWZlYTIyGOgF5w==: 00:27:39.286 07:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2I0Njk3ZGEzMGI0MWNlYTBkODM0MGYxMjQ5ODhiYzhjMmRkNWExZjJjMzJlMTAyObq5fQ==: ]] 00:27:39.286 07:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2I0Njk3ZGEzMGI0MWNlYTBkODM0MGYxMjQ5ODhiYzhjMmRkNWExZjJjMzJlMTAyObq5fQ==: 00:27:39.286 07:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:27:39.286 07:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.286 07:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:39.287 07:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:39.287 07:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:39.287 07:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.287 07:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:39.287 07:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.287 07:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.287 07:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.287 07:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.287 07:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:39.287 07:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:39.287 07:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:39.287 07:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.287 07:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.287 07:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:39.287 07:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.287 07:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:39.287 07:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:39.287 07:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:39.287 07:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:39.287 07:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.287 07:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.859 nvme0n1 00:27:39.859 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.859 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.859 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.859 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.859 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.859 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.859 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.859 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.859 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:39.859 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.859 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.859 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.859 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:39.859 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.859 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:39.859 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:39.859 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:39.859 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjA0ZjU1OGQ4ODE0NWY1ZjY2ZTE5OTRiYTdjOGU5ZTnGARZ1: 00:27:39.859 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTVlOGI2NjJiZWU3YWQ1ZDRhMzYwZWI0YWQ3ZjNhNWKGRxA4: 00:27:39.859 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:39.859 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:39.859 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjA0ZjU1OGQ4ODE0NWY1ZjY2ZTE5OTRiYTdjOGU5ZTnGARZ1: 00:27:39.859 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTVlOGI2NjJiZWU3YWQ1ZDRhMzYwZWI0YWQ3ZjNhNWKGRxA4: ]] 00:27:39.859 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTVlOGI2NjJiZWU3YWQ1ZDRhMzYwZWI0YWQ3ZjNhNWKGRxA4: 00:27:39.859 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:27:39.859 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.859 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:39.859 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:39.859 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:39.859 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.859 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:39.859 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.859 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.859 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.859 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.859 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:39.859 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:39.859 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:39.860 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.860 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.860 
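The nvmet_auth_set_key calls traced above drive the target half of each pass: the helper takes a digest, a DH group and a key index, looks the secrets up in the suite's keys/ckeys arrays, and echoes 'hmac(shaNNN)', the group name and the DHHC-1 secrets into the kernel nvmet target's per-host auth settings. The log records only the echoes, not their destinations, so the sketch below is a minimal reconstruction assuming the Linux nvmet configfs layout; the dhchap_* attribute paths are an assumption, not something this excerpt confirms.

nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]:-}
    # Assumed configfs entry for this test's host NQN.
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo "hmac($digest)" > "$host/dhchap_hash"    # e.g. 'hmac(sha384)'
    echo "$dhgroup" > "$host/dhchap_dhgroup"      # e.g. ffdhe8192
    echo "$key" > "$host/dhchap_key"              # host DHHC-1 secret
    # Bidirectional passes also install a controller secret; the
    # [[ -z ... ]] guard mirrors the host/auth.sh@51 check in the trace.
    [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
}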
07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:39.860 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.860 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:39.860 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:39.860 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:39.860 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:39.860 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.860 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.430 nvme0n1 00:27:40.430 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.430 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.430 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.430 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.430 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.430 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.692 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.692 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.692 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.692 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.692 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.692 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.692 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:27:40.692 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.692 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:40.692 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:40.692 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:40.692 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGU4MWY0MTA0YzFlMTk3Y2FhMjg4ZmUxNDMyZWNmNzA5MGZmMThmYmNlYTkxMDNkqYo5VA==: 00:27:40.692 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODczMGQzNzE1NTczNDcxNTMzMzg0ZTRkNmM2ZjU5YmGx7+TI: 00:27:40.692 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:40.692 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:40.692 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGU4MWY0MTA0YzFlMTk3Y2FhMjg4ZmUxNDMyZWNmNzA5MGZmMThmYmNlYTkxMDNkqYo5VA==: 00:27:40.692 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:ODczMGQzNzE1NTczNDcxNTMzMzg0ZTRkNmM2ZjU5YmGx7+TI: ]] 00:27:40.692 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODczMGQzNzE1NTczNDcxNTMzMzg0ZTRkNmM2ZjU5YmGx7+TI: 00:27:40.692 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:27:40.692 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.692 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:40.692 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:40.692 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:40.692 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.692 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:40.692 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.692 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.692 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.692 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.692 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:40.692 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:40.692 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:40.692 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.692 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.692 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:40.692 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.692 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:40.692 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:40.692 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:40.692 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:40.692 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.692 07:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.263 nvme0n1 00:27:41.263 07:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.263 07:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.263 07:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.263 07:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.263 07:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.263 07:10:40 
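Every attach in this section is preceded by the same get_main_ns_ip expansion: an associative array maps each transport to the name of the environment variable holding the right address, and bash indirect expansion turns that name into the 10.0.0.1 the trace finally echoes. A self-contained sketch of exactly the steps the trace walks through (TEST_TRANSPORT is an assumed variable name; the log only ever shows its expanded value, tcp):

get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        [rdma]=NVMF_FIRST_TARGET_IP
        [tcp]=NVMF_INITIATOR_IP
    )

    [[ -z $TEST_TRANSPORT ]] && return 1                    # [[ -z tcp ]]
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # [[ -z NVMF_INITIATOR_IP ]]
    ip=${ip_candidates[$TEST_TRANSPORT]}                    # ip=NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] && return 1                             # [[ -z 10.0.0.1 ]]
    echo "${!ip}"                                           # echo 10.0.0.1
}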
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.263 07:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.263 07:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.263 07:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.263 07:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.263 07:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.263 07:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.263 07:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:27:41.263 07:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.263 07:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:41.263 07:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:41.263 07:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:41.263 07:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzQ4Y2M0N2NmNTc2NjY5ZGI4ZjE5OWMyOWMyMjE0NDFhMDZmMjMyN2Y0ZjZhMjUxZTdhZTc1NWUxNTE0MzhlN9StETw=: 00:27:41.263 07:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:41.263 07:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:41.263 07:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:41.263 07:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzQ4Y2M0N2NmNTc2NjY5ZGI4ZjE5OWMyOWMyMjE0NDFhMDZmMjMyN2Y0ZjZhMjUxZTdhZTc1NWUxNTE0MzhlN9StETw=: 00:27:41.263 07:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:41.263 07:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:27:41.263 07:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.263 07:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:41.263 07:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:41.263 07:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:41.263 07:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.263 07:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:41.263 07:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.263 07:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.263 07:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.263 07:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.263 07:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:41.263 07:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:41.263 07:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:41.263 07:10:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.263 07:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.263 07:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:41.263 07:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.263 07:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:41.263 07:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:41.263 07:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:41.263 07:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:41.263 07:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.263 07:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.205 nvme0n1 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjRmYjY5NGZhZWE1NjI2MjU3YjAyOTA0M2I5NzlkMTFCaEzI: 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
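The host/auth.sh@100, @101 and @102 lines that appear above mark the section's outer structure: three nested loops over digests, DH groups and key indices, re-running the same set-key/connect/verify cycle for every combination. That is why the log switches here from sha384/ffdhe8192 to sha512/ffdhe2048 and, further down, to ffdhe3072. A reconstruction of the driver under stated assumptions (the array contents are inferred from the combinations this log exercises, and keys/ckeys are populated earlier in the script, outside this excerpt):

digests=(sha256 sha384 sha512)                                     # inferred
dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)  # inferred

for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            # Target side first, then the authenticated connect + checks.
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done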
ckey=DHHC-1:03:ZmE0YmQzZmJhMWRmMGVkNzZkODE3ZmY4NWY2YmY2MjNiYmNkMjM0OTgxMzQyMGI1MGNiMWNmMDlhZjgyZTBiY2txEeI=: 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjRmYjY5NGZhZWE1NjI2MjU3YjAyOTA0M2I5NzlkMTFCaEzI: 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmE0YmQzZmJhMWRmMGVkNzZkODE3ZmY4NWY2YmY2MjNiYmNkMjM0OTgxMzQyMGI1MGNiMWNmMDlhZjgyZTBiY2txEeI=: ]] 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmE0YmQzZmJhMWRmMGVkNzZkODE3ZmY4NWY2YmY2MjNiYmNkMjM0OTgxMzQyMGI1MGNiMWNmMDlhZjgyZTBiY2txEeI=: 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:42.205 nvme0n1 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODM3OTBkMDVjNzdjMTAwY2Q4YTg2ZTlkMjllZTZlM2M2ZTVkZTJmMjc2MWZlYTIyGOgF5w==: 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2I0Njk3ZGEzMGI0MWNlYTBkODM0MGYxMjQ5ODhiYzhjMmRkNWExZjJjMzJlMTAyObq5fQ==: 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODM3OTBkMDVjNzdjMTAwY2Q4YTg2ZTlkMjllZTZlM2M2ZTVkZTJmMjc2MWZlYTIyGOgF5w==: 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2I0Njk3ZGEzMGI0MWNlYTBkODM0MGYxMjQ5ODhiYzhjMmRkNWExZjJjMzJlMTAyObq5fQ==: ]] 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2I0Njk3ZGEzMGI0MWNlYTBkODM0MGYxMjQ5ODhiYzhjMmRkNWExZjJjMzJlMTAyObq5fQ==: 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.205 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.466 nvme0n1 00:27:42.466 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.466 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.467 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.467 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.467 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.467 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.467 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.467 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.467 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.467 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.467 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.467 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.467 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:42.467 
07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.467 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:42.467 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:42.467 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:42.467 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjA0ZjU1OGQ4ODE0NWY1ZjY2ZTE5OTRiYTdjOGU5ZTnGARZ1: 00:27:42.467 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTVlOGI2NjJiZWU3YWQ1ZDRhMzYwZWI0YWQ3ZjNhNWKGRxA4: 00:27:42.467 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:42.467 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:42.467 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjA0ZjU1OGQ4ODE0NWY1ZjY2ZTE5OTRiYTdjOGU5ZTnGARZ1: 00:27:42.467 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTVlOGI2NjJiZWU3YWQ1ZDRhMzYwZWI0YWQ3ZjNhNWKGRxA4: ]] 00:27:42.467 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTVlOGI2NjJiZWU3YWQ1ZDRhMzYwZWI0YWQ3ZjNhNWKGRxA4: 00:27:42.467 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:27:42.467 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.467 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:42.467 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:42.467 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:42.467 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.467 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:42.467 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.467 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.467 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.467 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.467 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:42.467 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:42.467 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:42.467 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.467 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.467 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:42.467 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.467 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:42.467 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:42.467 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
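The initiator half of each pass is the rpc_cmd pair that recurs throughout: bdev_nvme_set_options pins the host to exactly one digest and one DH group, and bdev_nvme_attach_controller then connects with the matching DH-HMAC-CHAP key pair, so a successful attach proves that this specific combination negotiates end to end. Outside the suite's rpc_cmd wrapper the same two calls can be issued with SPDK's stock RPC client; key1/ckey1 are key names the suite presumably registered with the keyring beforehand (that setup is not part of this excerpt):

./scripts/rpc.py bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1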
nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:42.467 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:42.467 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.467 07:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.728 nvme0n1 00:27:42.728 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.728 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.728 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.728 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.728 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.728 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.728 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.728 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.728 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.728 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.728 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.728 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.728 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:27:42.728 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.728 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:42.728 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:42.728 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:42.728 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGU4MWY0MTA0YzFlMTk3Y2FhMjg4ZmUxNDMyZWNmNzA5MGZmMThmYmNlYTkxMDNkqYo5VA==: 00:27:42.728 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODczMGQzNzE1NTczNDcxNTMzMzg0ZTRkNmM2ZjU5YmGx7+TI: 00:27:42.728 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:42.728 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:42.728 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGU4MWY0MTA0YzFlMTk3Y2FhMjg4ZmUxNDMyZWNmNzA5MGZmMThmYmNlYTkxMDNkqYo5VA==: 00:27:42.728 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODczMGQzNzE1NTczNDcxNTMzMzg0ZTRkNmM2ZjU5YmGx7+TI: ]] 00:27:42.728 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODczMGQzNzE1NTczNDcxNTMzMzg0ZTRkNmM2ZjU5YmGx7+TI: 00:27:42.728 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:27:42.728 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.728 
07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:42.728 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:42.728 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:42.728 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.728 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:42.728 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.728 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.728 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.728 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.728 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:42.728 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:42.728 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:42.728 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.728 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.728 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:42.728 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.728 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:42.728 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:42.728 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:42.728 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:42.728 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.728 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.989 nvme0n1 00:27:42.989 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.989 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.989 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.989 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.989 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.989 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.989 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.989 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.989 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.989 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
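The ckey=(...) assignment traced at host/auth.sh@58 is doing real work. ${ckeys[keyid]:+...} is bash's alternate-value expansion: the array receives the two words --dhchap-ctrlr-key ckeyN only when a non-empty controller key exists at that index, and stays empty otherwise, so "${ckey[@]}" can be spliced into the attach command without leaving a stray empty argument. That is why the keyid=4 passes in this log attach with --dhchap-key key4 alone. A standalone demonstration (secret truncated for illustration):

ckeys=([3]="DHHC-1:00:ODcz...")   # index populated, as in the keyid=3 pass
keyid=3
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
echo "${#ckey[@]}"    # -> 2: --dhchap-ctrlr-key and ckey3

ckeys[3]=""                       # empty entry, as in the keyid=4 pass
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
echo "${#ckey[@]}"    # -> 0: :+ yields nothing for unset or empty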
common/autotest_common.sh@10 -- # set +x 00:27:42.989 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.989 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.989 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:27:42.989 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.989 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:42.989 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:42.989 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:42.990 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzQ4Y2M0N2NmNTc2NjY5ZGI4ZjE5OWMyOWMyMjE0NDFhMDZmMjMyN2Y0ZjZhMjUxZTdhZTc1NWUxNTE0MzhlN9StETw=: 00:27:42.990 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:42.990 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:42.990 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:42.990 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzQ4Y2M0N2NmNTc2NjY5ZGI4ZjE5OWMyOWMyMjE0NDFhMDZmMjMyN2Y0ZjZhMjUxZTdhZTc1NWUxNTE0MzhlN9StETw=: 00:27:42.990 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:42.990 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:27:42.990 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.990 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:42.990 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:42.990 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:42.990 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.990 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:42.990 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.990 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.990 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.990 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.990 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:42.990 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:42.990 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:42.990 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.990 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.990 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:42.990 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.990 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:42.990 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:42.990 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:42.990 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:42.990 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.990 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.251 nvme0n1 00:27:43.251 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.251 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.251 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.251 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.251 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.251 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.251 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.252 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.252 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.252 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.252 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.252 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:43.252 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.252 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:27:43.252 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.252 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:43.252 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:43.252 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:43.252 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjRmYjY5NGZhZWE1NjI2MjU3YjAyOTA0M2I5NzlkMTFCaEzI: 00:27:43.252 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmE0YmQzZmJhMWRmMGVkNzZkODE3ZmY4NWY2YmY2MjNiYmNkMjM0OTgxMzQyMGI1MGNiMWNmMDlhZjgyZTBiY2txEeI=: 00:27:43.252 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:43.252 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:43.252 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjRmYjY5NGZhZWE1NjI2MjU3YjAyOTA0M2I5NzlkMTFCaEzI: 00:27:43.252 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmE0YmQzZmJhMWRmMGVkNzZkODE3ZmY4NWY2YmY2MjNiYmNkMjM0OTgxMzQyMGI1MGNiMWNmMDlhZjgyZTBiY2txEeI=: ]] 00:27:43.252 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZmE0YmQzZmJhMWRmMGVkNzZkODE3ZmY4NWY2YmY2MjNiYmNkMjM0OTgxMzQyMGI1MGNiMWNmMDlhZjgyZTBiY2txEeI=: 00:27:43.252 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:27:43.252 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.252 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:43.252 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:43.252 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:43.252 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.252 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:43.252 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.252 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.252 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.252 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.252 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:43.252 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:43.252 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:43.252 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.252 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.252 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:43.252 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.252 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:43.252 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:43.252 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:43.252 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:43.252 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.252 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.513 nvme0n1 00:27:43.513 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.513 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.513 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.513 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.513 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.513 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.513 
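Every pass in this section closes with the same verify-and-teardown shape: list the controllers, pull the names out with jq, require that exactly nvme0 came up (its presence means the DH-HMAC-CHAP handshake succeeded, since the attach would have failed otherwise), and detach so the next key starts from a clean slate. Condensed, using the suite's rpc_cmd wrapper just as the trace does:

name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == "nvme0" ]]                      # authentication succeeded
rpc_cmd bdev_nvme_detach_controller nvme0   # clean slate for the next keyid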
07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.513 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.513 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.513 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.513 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.513 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.513 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:27:43.513 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.513 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:43.513 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:43.513 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:43.513 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODM3OTBkMDVjNzdjMTAwY2Q4YTg2ZTlkMjllZTZlM2M2ZTVkZTJmMjc2MWZlYTIyGOgF5w==: 00:27:43.513 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2I0Njk3ZGEzMGI0MWNlYTBkODM0MGYxMjQ5ODhiYzhjMmRkNWExZjJjMzJlMTAyObq5fQ==: 00:27:43.513 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:43.513 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:43.513 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODM3OTBkMDVjNzdjMTAwY2Q4YTg2ZTlkMjllZTZlM2M2ZTVkZTJmMjc2MWZlYTIyGOgF5w==: 00:27:43.513 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2I0Njk3ZGEzMGI0MWNlYTBkODM0MGYxMjQ5ODhiYzhjMmRkNWExZjJjMzJlMTAyObq5fQ==: ]] 00:27:43.513 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2I0Njk3ZGEzMGI0MWNlYTBkODM0MGYxMjQ5ODhiYzhjMmRkNWExZjJjMzJlMTAyObq5fQ==: 00:27:43.513 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:27:43.513 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.513 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:43.513 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:43.513 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:43.513 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.513 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:43.513 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.513 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.513 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.513 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.513 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:43.513 07:10:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:43.513 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:43.513 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.513 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.513 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:43.513 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.513 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:43.513 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:43.513 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:43.513 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:43.513 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.513 07:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.774 nvme0n1 00:27:43.774 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.774 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.774 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.774 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.774 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.774 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.774 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.774 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.774 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.774 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.774 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.774 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.774 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:27:43.774 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.774 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:43.774 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:43.774 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:43.774 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjA0ZjU1OGQ4ODE0NWY1ZjY2ZTE5OTRiYTdjOGU5ZTnGARZ1: 00:27:43.774 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTVlOGI2NjJiZWU3YWQ1ZDRhMzYwZWI0YWQ3ZjNhNWKGRxA4: 00:27:43.774 07:10:43 
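All the secrets in this section share the DHHC-1:NN:...: shape. To my reading this is the nvme-cli gen-dhchap-key format, where the two-digit field names the transform applied to the secret (00 none, 01/02/03 for SHA-256/384/512) and the base64 payload carries the secret with a CRC-32 appended; treat that interpretation as an assumption, since the log itself only passes these strings around opaquely. Under that assumption the payload length checks out directly:

key="DHHC-1:00:ODM3OTBkMDVjNzdjMTAwY2Q4YTg2ZTlkMjllZTZlM2M2ZTVkZTJmMjc2MWZlYTIyGOgF5w==:"
payload=${key#DHHC-1:??:}    # drop the prefix and transform field
payload=${payload%:}         # drop the trailing colon
echo -n "$payload" | base64 -d | wc -c    # -> 52: 48-byte secret + 4-byte CRC-32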
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:43.774 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:43.774 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjA0ZjU1OGQ4ODE0NWY1ZjY2ZTE5OTRiYTdjOGU5ZTnGARZ1: 00:27:43.774 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTVlOGI2NjJiZWU3YWQ1ZDRhMzYwZWI0YWQ3ZjNhNWKGRxA4: ]] 00:27:43.774 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTVlOGI2NjJiZWU3YWQ1ZDRhMzYwZWI0YWQ3ZjNhNWKGRxA4: 00:27:43.774 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:27:43.774 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.774 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:43.774 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:43.774 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:43.774 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.774 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:43.774 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.774 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.774 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.774 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.774 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:43.774 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:43.774 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:43.774 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.774 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.774 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:43.774 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.774 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:43.774 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:43.774 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:43.774 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:43.774 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.774 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.035 nvme0n1 00:27:44.035 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.035 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.035 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.035 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.035 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.035 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.035 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.035 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.035 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.035 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.035 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.035 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.035 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:27:44.035 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.035 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:44.035 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:44.035 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:44.035 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGU4MWY0MTA0YzFlMTk3Y2FhMjg4ZmUxNDMyZWNmNzA5MGZmMThmYmNlYTkxMDNkqYo5VA==: 00:27:44.035 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODczMGQzNzE1NTczNDcxNTMzMzg0ZTRkNmM2ZjU5YmGx7+TI: 00:27:44.035 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:44.035 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:44.035 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGU4MWY0MTA0YzFlMTk3Y2FhMjg4ZmUxNDMyZWNmNzA5MGZmMThmYmNlYTkxMDNkqYo5VA==: 00:27:44.035 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODczMGQzNzE1NTczNDcxNTMzMzg0ZTRkNmM2ZjU5YmGx7+TI: ]] 00:27:44.035 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODczMGQzNzE1NTczNDcxNTMzMzg0ZTRkNmM2ZjU5YmGx7+TI: 00:27:44.035 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:27:44.035 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.035 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:44.035 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:44.035 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:44.035 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.035 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:44.035 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.035 07:10:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.035 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.035 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.035 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:44.035 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:44.035 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:44.035 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.035 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.035 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:44.035 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.035 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:44.035 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:44.035 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:44.035 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:44.035 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.035 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.295 nvme0n1 00:27:44.295 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.295 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.295 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.295 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.295 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.295 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.295 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.295 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.295 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.295 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.295 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.295 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.295 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:27:44.295 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.295 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:44.295 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:44.295 
07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:44.295 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzQ4Y2M0N2NmNTc2NjY5ZGI4ZjE5OWMyOWMyMjE0NDFhMDZmMjMyN2Y0ZjZhMjUxZTdhZTc1NWUxNTE0MzhlN9StETw=: 00:27:44.295 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:44.295 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:44.295 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:44.295 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzQ4Y2M0N2NmNTc2NjY5ZGI4ZjE5OWMyOWMyMjE0NDFhMDZmMjMyN2Y0ZjZhMjUxZTdhZTc1NWUxNTE0MzhlN9StETw=: 00:27:44.295 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:44.295 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:27:44.295 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.296 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:44.296 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:44.296 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:44.296 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.296 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:44.296 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.296 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.296 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.296 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.296 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:44.296 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:44.296 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:44.296 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.296 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.296 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:44.296 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.296 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:44.296 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:44.296 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:44.296 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:44.296 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.296 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
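The pass that just completed above (sha512 / ffdhe3072, keyid 4) is one iteration of the pattern this whole section repeats: provision the key pair on the kernel nvmet target, pin the SPDK host to a single digest/dhgroup via bdev_nvme_set_options, attach with the matching --dhchap-key / --dhchap-ctrlr-key, confirm the controller came up, then detach. A condensed sketch of that loop (host/auth.sh@101-65) follows; it assumes the keys/ckeys/dhgroups arrays, the rpc_cmd wrapper around scripts/rpc.py, and the keyring entries key0..key4 / ckey0..ckey3 were set up earlier in the script, outside this excerpt.

# Minimal sketch, not the verbatim script: one iteration per (dhgroup, keyid)
# pair at the sha512 digest this section is exercising.
digest=sha512
for dhgroup in "${dhgroups[@]}"; do            # ffdhe3072, ffdhe4096, ffdhe6144, ...
    for keyid in "${!keys[@]}"; do
        # Target side: install this key pair for the host NQN.
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

        # Host side: restrict negotiation to exactly this digest/dhgroup,
        # then attach with the matching key (and controller key, when set).
        rpc_cmd bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" \
            ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}

        # DH-HMAC-CHAP succeeded iff the controller actually shows up.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    done
done

Each successful iteration leaves exactly the nvme0n1 namespace line and the nvme0 name check visible in the trace, which is what the surrounding log shows.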
00:27:44.556 nvme0n1 00:27:44.556 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.556 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.556 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.556 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.556 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.556 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.556 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.556 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.556 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.556 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.556 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.556 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:44.556 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.556 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:27:44.556 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.556 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:44.556 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:44.556 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:44.556 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjRmYjY5NGZhZWE1NjI2MjU3YjAyOTA0M2I5NzlkMTFCaEzI: 00:27:44.556 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmE0YmQzZmJhMWRmMGVkNzZkODE3ZmY4NWY2YmY2MjNiYmNkMjM0OTgxMzQyMGI1MGNiMWNmMDlhZjgyZTBiY2txEeI=: 00:27:44.556 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:44.556 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:44.556 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjRmYjY5NGZhZWE1NjI2MjU3YjAyOTA0M2I5NzlkMTFCaEzI: 00:27:44.556 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmE0YmQzZmJhMWRmMGVkNzZkODE3ZmY4NWY2YmY2MjNiYmNkMjM0OTgxMzQyMGI1MGNiMWNmMDlhZjgyZTBiY2txEeI=: ]] 00:27:44.556 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmE0YmQzZmJhMWRmMGVkNzZkODE3ZmY4NWY2YmY2MjNiYmNkMjM0OTgxMzQyMGI1MGNiMWNmMDlhZjgyZTBiY2txEeI=: 00:27:44.556 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:27:44.556 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.556 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:44.556 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:44.556 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:44.556 07:10:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.556 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:44.556 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.556 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.556 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.556 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.556 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:44.556 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:44.556 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:44.556 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.556 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.556 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:44.556 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.556 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:44.556 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:44.556 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:44.556 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:44.556 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.556 07:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.817 nvme0n1 00:27:44.817 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.817 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.817 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.817 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.817 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.817 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.817 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.817 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.817 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.817 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.817 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.817 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.817 07:10:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:27:44.817 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.817 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:44.817 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:44.817 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:44.817 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODM3OTBkMDVjNzdjMTAwY2Q4YTg2ZTlkMjllZTZlM2M2ZTVkZTJmMjc2MWZlYTIyGOgF5w==: 00:27:44.817 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2I0Njk3ZGEzMGI0MWNlYTBkODM0MGYxMjQ5ODhiYzhjMmRkNWExZjJjMzJlMTAyObq5fQ==: 00:27:44.817 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:44.817 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:44.817 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODM3OTBkMDVjNzdjMTAwY2Q4YTg2ZTlkMjllZTZlM2M2ZTVkZTJmMjc2MWZlYTIyGOgF5w==: 00:27:44.817 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2I0Njk3ZGEzMGI0MWNlYTBkODM0MGYxMjQ5ODhiYzhjMmRkNWExZjJjMzJlMTAyObq5fQ==: ]] 00:27:44.817 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2I0Njk3ZGEzMGI0MWNlYTBkODM0MGYxMjQ5ODhiYzhjMmRkNWExZjJjMzJlMTAyObq5fQ==: 00:27:44.817 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:27:44.817 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.817 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:44.817 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:44.817 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:44.817 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.817 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:44.817 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.817 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.817 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.818 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.818 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:44.818 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:44.818 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:44.818 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.818 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.818 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:44.818 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.818 07:10:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:44.818 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:44.818 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:44.818 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:44.818 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.818 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.078 nvme0n1 00:27:45.078 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.078 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.078 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.078 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.078 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.078 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.078 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.078 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.078 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.078 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.338 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.338 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.338 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:45.338 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.338 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:45.338 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:45.338 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:45.338 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjA0ZjU1OGQ4ODE0NWY1ZjY2ZTE5OTRiYTdjOGU5ZTnGARZ1: 00:27:45.338 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTVlOGI2NjJiZWU3YWQ1ZDRhMzYwZWI0YWQ3ZjNhNWKGRxA4: 00:27:45.338 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:45.338 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:45.338 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjA0ZjU1OGQ4ODE0NWY1ZjY2ZTE5OTRiYTdjOGU5ZTnGARZ1: 00:27:45.338 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTVlOGI2NjJiZWU3YWQ1ZDRhMzYwZWI0YWQ3ZjNhNWKGRxA4: ]] 00:27:45.338 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTVlOGI2NjJiZWU3YWQ1ZDRhMzYwZWI0YWQ3ZjNhNWKGRxA4: 00:27:45.338 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:27:45.338 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.338 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:45.338 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:45.338 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:45.338 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.338 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:45.338 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.338 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.338 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.338 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.338 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:45.339 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:45.339 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:45.339 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.339 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.339 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:45.339 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.339 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:45.339 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:45.339 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:45.339 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:45.339 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.339 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.599 nvme0n1 00:27:45.599 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.599 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.599 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.599 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.599 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.599 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.599 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.599 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:45.599 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.599 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.599 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.599 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.599 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:27:45.599 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.599 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:45.599 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:45.599 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:45.599 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGU4MWY0MTA0YzFlMTk3Y2FhMjg4ZmUxNDMyZWNmNzA5MGZmMThmYmNlYTkxMDNkqYo5VA==: 00:27:45.599 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODczMGQzNzE1NTczNDcxNTMzMzg0ZTRkNmM2ZjU5YmGx7+TI: 00:27:45.599 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:45.599 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:45.599 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGU4MWY0MTA0YzFlMTk3Y2FhMjg4ZmUxNDMyZWNmNzA5MGZmMThmYmNlYTkxMDNkqYo5VA==: 00:27:45.599 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODczMGQzNzE1NTczNDcxNTMzMzg0ZTRkNmM2ZjU5YmGx7+TI: ]] 00:27:45.599 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODczMGQzNzE1NTczNDcxNTMzMzg0ZTRkNmM2ZjU5YmGx7+TI: 00:27:45.599 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:45.599 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.599 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:45.599 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:45.599 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:45.599 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.599 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:45.599 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.599 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.599 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.599 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.599 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:45.599 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:45.599 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:45.599 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.599 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.599 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:45.599 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.599 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:45.599 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:45.599 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:45.599 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:45.599 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.599 07:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.859 nvme0n1 00:27:45.859 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.859 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.859 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.859 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.859 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.859 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.859 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.859 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.859 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.859 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.859 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.859 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.859 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:45.859 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.859 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:45.859 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:45.859 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:45.860 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzQ4Y2M0N2NmNTc2NjY5ZGI4ZjE5OWMyOWMyMjE0NDFhMDZmMjMyN2Y0ZjZhMjUxZTdhZTc1NWUxNTE0MzhlN9StETw=: 00:27:45.860 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:45.860 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:45.860 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:45.860 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YzQ4Y2M0N2NmNTc2NjY5ZGI4ZjE5OWMyOWMyMjE0NDFhMDZmMjMyN2Y0ZjZhMjUxZTdhZTc1NWUxNTE0MzhlN9StETw=: 00:27:45.860 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:45.860 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:45.860 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.860 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:45.860 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:45.860 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:45.860 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.860 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:45.860 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.860 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.860 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.860 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.860 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:45.860 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:45.860 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:45.860 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.860 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.860 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:45.860 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.860 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:45.860 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:45.860 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:45.860 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:45.860 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.860 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.120 nvme0n1 00:27:46.120 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.120 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.120 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.120 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.120 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.120 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.120 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.120 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.120 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.120 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.120 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.120 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:46.120 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.120 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:46.120 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.120 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:46.120 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:46.120 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:46.120 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjRmYjY5NGZhZWE1NjI2MjU3YjAyOTA0M2I5NzlkMTFCaEzI: 00:27:46.120 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmE0YmQzZmJhMWRmMGVkNzZkODE3ZmY4NWY2YmY2MjNiYmNkMjM0OTgxMzQyMGI1MGNiMWNmMDlhZjgyZTBiY2txEeI=: 00:27:46.120 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:46.120 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:46.120 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjRmYjY5NGZhZWE1NjI2MjU3YjAyOTA0M2I5NzlkMTFCaEzI: 00:27:46.120 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmE0YmQzZmJhMWRmMGVkNzZkODE3ZmY4NWY2YmY2MjNiYmNkMjM0OTgxMzQyMGI1MGNiMWNmMDlhZjgyZTBiY2txEeI=: ]] 00:27:46.120 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmE0YmQzZmJhMWRmMGVkNzZkODE3ZmY4NWY2YmY2MjNiYmNkMjM0OTgxMzQyMGI1MGNiMWNmMDlhZjgyZTBiY2txEeI=: 00:27:46.120 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:27:46.120 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.120 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:46.120 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:46.120 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:46.120 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.120 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:46.120 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.120 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.120 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.120 07:10:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.120 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:46.120 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:46.120 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:46.120 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.120 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.120 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:46.120 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.120 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:46.120 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:46.120 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:46.120 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:46.120 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.120 07:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.691 nvme0n1 00:27:46.691 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.691 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.691 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.691 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.691 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.691 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.691 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.691 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.691 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.691 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.691 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.691 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.691 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:46.691 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.691 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:46.691 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:46.691 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:46.691 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ODM3OTBkMDVjNzdjMTAwY2Q4YTg2ZTlkMjllZTZlM2M2ZTVkZTJmMjc2MWZlYTIyGOgF5w==: 00:27:46.691 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2I0Njk3ZGEzMGI0MWNlYTBkODM0MGYxMjQ5ODhiYzhjMmRkNWExZjJjMzJlMTAyObq5fQ==: 00:27:46.691 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:46.691 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:46.691 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODM3OTBkMDVjNzdjMTAwY2Q4YTg2ZTlkMjllZTZlM2M2ZTVkZTJmMjc2MWZlYTIyGOgF5w==: 00:27:46.691 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2I0Njk3ZGEzMGI0MWNlYTBkODM0MGYxMjQ5ODhiYzhjMmRkNWExZjJjMzJlMTAyObq5fQ==: ]] 00:27:46.691 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2I0Njk3ZGEzMGI0MWNlYTBkODM0MGYxMjQ5ODhiYzhjMmRkNWExZjJjMzJlMTAyObq5fQ==: 00:27:46.691 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:27:46.691 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.691 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:46.691 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:46.691 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:46.691 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.691 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:46.691 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.691 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.691 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.691 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.691 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:46.691 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:46.691 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:46.692 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.692 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.692 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:46.692 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.692 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:46.692 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:46.692 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:46.692 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:46.692 07:10:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.692 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.261 nvme0n1 00:27:47.261 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.261 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.261 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.261 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.261 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.261 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.261 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.261 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.261 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.261 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.261 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.261 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.261 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:47.261 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.261 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:47.261 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:47.262 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:47.262 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjA0ZjU1OGQ4ODE0NWY1ZjY2ZTE5OTRiYTdjOGU5ZTnGARZ1: 00:27:47.262 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTVlOGI2NjJiZWU3YWQ1ZDRhMzYwZWI0YWQ3ZjNhNWKGRxA4: 00:27:47.262 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:47.262 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:47.262 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjA0ZjU1OGQ4ODE0NWY1ZjY2ZTE5OTRiYTdjOGU5ZTnGARZ1: 00:27:47.262 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTVlOGI2NjJiZWU3YWQ1ZDRhMzYwZWI0YWQ3ZjNhNWKGRxA4: ]] 00:27:47.262 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTVlOGI2NjJiZWU3YWQ1ZDRhMzYwZWI0YWQ3ZjNhNWKGRxA4: 00:27:47.262 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:47.262 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.262 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:47.262 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:47.262 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:47.262 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.262 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:47.262 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.262 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.262 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.262 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.262 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:47.262 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:47.262 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:47.262 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.262 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.262 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:47.262 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.262 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:47.262 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:47.262 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:47.262 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:47.262 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.262 07:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.522 nvme0n1 00:27:47.522 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.522 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.522 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.523 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.523 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.523 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.784 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.784 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.784 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.784 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.784 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.784 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.784 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:27:47.784 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.784 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:47.784 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:47.784 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:47.784 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGU4MWY0MTA0YzFlMTk3Y2FhMjg4ZmUxNDMyZWNmNzA5MGZmMThmYmNlYTkxMDNkqYo5VA==: 00:27:47.784 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODczMGQzNzE1NTczNDcxNTMzMzg0ZTRkNmM2ZjU5YmGx7+TI: 00:27:47.784 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:47.784 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:47.784 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGU4MWY0MTA0YzFlMTk3Y2FhMjg4ZmUxNDMyZWNmNzA5MGZmMThmYmNlYTkxMDNkqYo5VA==: 00:27:47.784 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODczMGQzNzE1NTczNDcxNTMzMzg0ZTRkNmM2ZjU5YmGx7+TI: ]] 00:27:47.784 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODczMGQzNzE1NTczNDcxNTMzMzg0ZTRkNmM2ZjU5YmGx7+TI: 00:27:47.784 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:47.784 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.784 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:47.784 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:47.784 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:47.784 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.784 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:47.784 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.784 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.784 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.784 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.784 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:47.784 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:47.784 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:47.784 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.784 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.784 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:47.784 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.784 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:47.784 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:47.784 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:47.784 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:47.784 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.784 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.045 nvme0n1 00:27:48.045 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.045 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.045 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.045 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.045 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.045 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.307 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.307 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.307 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.307 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.307 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.307 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.307 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:48.307 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.307 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:48.307 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:48.307 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:48.307 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzQ4Y2M0N2NmNTc2NjY5ZGI4ZjE5OWMyOWMyMjE0NDFhMDZmMjMyN2Y0ZjZhMjUxZTdhZTc1NWUxNTE0MzhlN9StETw=: 00:27:48.307 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:48.307 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:48.307 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:48.307 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzQ4Y2M0N2NmNTc2NjY5ZGI4ZjE5OWMyOWMyMjE0NDFhMDZmMjMyN2Y0ZjZhMjUxZTdhZTc1NWUxNTE0MzhlN9StETw=: 00:27:48.307 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:48.307 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:48.307 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.307 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:48.307 07:10:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:48.307 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:48.307 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.307 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:48.307 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.307 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.307 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.307 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.307 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:48.307 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:48.307 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:48.307 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.307 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.307 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:48.307 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.307 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:48.307 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:48.307 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:48.307 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:48.307 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.307 07:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.567 nvme0n1 00:27:48.567 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.567 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.567 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.567 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.567 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.567 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.567 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.567 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.567 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.567 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.567 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.568 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:48.568 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.827 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:48.827 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.827 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:48.827 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:48.827 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:48.827 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjRmYjY5NGZhZWE1NjI2MjU3YjAyOTA0M2I5NzlkMTFCaEzI: 00:27:48.827 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmE0YmQzZmJhMWRmMGVkNzZkODE3ZmY4NWY2YmY2MjNiYmNkMjM0OTgxMzQyMGI1MGNiMWNmMDlhZjgyZTBiY2txEeI=: 00:27:48.827 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:48.827 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:48.827 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjRmYjY5NGZhZWE1NjI2MjU3YjAyOTA0M2I5NzlkMTFCaEzI: 00:27:48.827 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmE0YmQzZmJhMWRmMGVkNzZkODE3ZmY4NWY2YmY2MjNiYmNkMjM0OTgxMzQyMGI1MGNiMWNmMDlhZjgyZTBiY2txEeI=: ]] 00:27:48.827 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmE0YmQzZmJhMWRmMGVkNzZkODE3ZmY4NWY2YmY2MjNiYmNkMjM0OTgxMzQyMGI1MGNiMWNmMDlhZjgyZTBiY2txEeI=: 00:27:48.827 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:48.827 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.827 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:48.827 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:48.827 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:48.827 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.827 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:48.827 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.827 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.827 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.827 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.827 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:48.827 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:48.827 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:48.827 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.827 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.827 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:48.827 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.827 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:48.827 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:48.827 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:48.827 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:48.827 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.827 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.399 nvme0n1 00:27:49.399 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.399 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.399 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.399 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.399 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.399 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.399 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.399 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.399 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.399 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.399 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.399 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.399 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:49.399 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.399 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:49.399 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:49.399 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:49.399 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODM3OTBkMDVjNzdjMTAwY2Q4YTg2ZTlkMjllZTZlM2M2ZTVkZTJmMjc2MWZlYTIyGOgF5w==: 00:27:49.399 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2I0Njk3ZGEzMGI0MWNlYTBkODM0MGYxMjQ5ODhiYzhjMmRkNWExZjJjMzJlMTAyObq5fQ==: 00:27:49.399 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:49.399 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:49.399 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ODM3OTBkMDVjNzdjMTAwY2Q4YTg2ZTlkMjllZTZlM2M2ZTVkZTJmMjc2MWZlYTIyGOgF5w==: 00:27:49.399 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2I0Njk3ZGEzMGI0MWNlYTBkODM0MGYxMjQ5ODhiYzhjMmRkNWExZjJjMzJlMTAyObq5fQ==: ]] 00:27:49.399 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2I0Njk3ZGEzMGI0MWNlYTBkODM0MGYxMjQ5ODhiYzhjMmRkNWExZjJjMzJlMTAyObq5fQ==: 00:27:49.399 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:49.399 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.399 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:49.399 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:49.399 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:49.399 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.399 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:49.399 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.399 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.399 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.399 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.399 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:49.399 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:49.399 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:49.399 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.399 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.399 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:49.399 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.399 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:49.399 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:49.399 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:49.399 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:49.399 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.399 07:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.972 nvme0n1 00:27:49.972 07:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.972 07:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.972 07:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.972 07:10:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.972 07:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.972 07:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.234 07:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.234 07:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.234 07:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.234 07:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.234 07:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.234 07:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.234 07:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:50.234 07:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.234 07:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:50.234 07:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:50.234 07:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:50.234 07:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjA0ZjU1OGQ4ODE0NWY1ZjY2ZTE5OTRiYTdjOGU5ZTnGARZ1: 00:27:50.234 07:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTVlOGI2NjJiZWU3YWQ1ZDRhMzYwZWI0YWQ3ZjNhNWKGRxA4: 00:27:50.234 07:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:50.234 07:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:50.234 07:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjA0ZjU1OGQ4ODE0NWY1ZjY2ZTE5OTRiYTdjOGU5ZTnGARZ1: 00:27:50.234 07:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTVlOGI2NjJiZWU3YWQ1ZDRhMzYwZWI0YWQ3ZjNhNWKGRxA4: ]] 00:27:50.234 07:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTVlOGI2NjJiZWU3YWQ1ZDRhMzYwZWI0YWQ3ZjNhNWKGRxA4: 00:27:50.234 07:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:27:50.234 07:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.234 07:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:50.234 07:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:50.234 07:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:50.234 07:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.234 07:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:50.234 07:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.234 07:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.234 07:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.234 07:10:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.234 07:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:50.235 07:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:50.235 07:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:50.235 07:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.235 07:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.235 07:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:50.235 07:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.235 07:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:50.235 07:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:50.235 07:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:50.235 07:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:50.235 07:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.235 07:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.806 nvme0n1 00:27:50.806 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.806 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.806 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.806 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.806 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.806 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.806 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.806 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.806 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.806 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.806 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.806 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.806 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:27:50.806 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.806 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:50.806 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:50.806 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:50.806 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MGU4MWY0MTA0YzFlMTk3Y2FhMjg4ZmUxNDMyZWNmNzA5MGZmMThmYmNlYTkxMDNkqYo5VA==: 00:27:50.806 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODczMGQzNzE1NTczNDcxNTMzMzg0ZTRkNmM2ZjU5YmGx7+TI: 00:27:50.806 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:50.806 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:50.806 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGU4MWY0MTA0YzFlMTk3Y2FhMjg4ZmUxNDMyZWNmNzA5MGZmMThmYmNlYTkxMDNkqYo5VA==: 00:27:50.806 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODczMGQzNzE1NTczNDcxNTMzMzg0ZTRkNmM2ZjU5YmGx7+TI: ]] 00:27:50.806 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODczMGQzNzE1NTczNDcxNTMzMzg0ZTRkNmM2ZjU5YmGx7+TI: 00:27:50.806 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:27:50.806 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.806 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:50.806 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:50.806 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:50.806 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.806 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:50.806 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.806 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.806 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.806 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.806 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:50.806 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:50.806 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:50.806 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.806 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.806 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:50.806 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.806 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:50.806 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:50.806 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:50.806 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:50.806 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.806 
07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.749 nvme0n1 00:27:51.749 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.749 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.749 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.749 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.749 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.749 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.749 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.749 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.749 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.749 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.749 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.749 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.749 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:27:51.749 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.749 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:51.749 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:51.749 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:51.749 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzQ4Y2M0N2NmNTc2NjY5ZGI4ZjE5OWMyOWMyMjE0NDFhMDZmMjMyN2Y0ZjZhMjUxZTdhZTc1NWUxNTE0MzhlN9StETw=: 00:27:51.749 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:51.749 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:51.749 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:51.749 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzQ4Y2M0N2NmNTc2NjY5ZGI4ZjE5OWMyOWMyMjE0NDFhMDZmMjMyN2Y0ZjZhMjUxZTdhZTc1NWUxNTE0MzhlN9StETw=: 00:27:51.749 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:51.749 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:27:51.749 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.749 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:51.749 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:51.750 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:51.750 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.750 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:51.750 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.750 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.750 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.750 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.750 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:51.750 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:51.750 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:51.750 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.750 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.750 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:51.750 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.750 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:51.750 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:51.750 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:51.750 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:51.750 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.750 07:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.322 nvme0n1 00:27:52.322 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.322 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.322 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.322 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.322 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.322 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.322 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.322 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.322 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.322 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.322 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.322 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:52.322 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.322 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:52.322 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:52.322 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:27:52.322 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODM3OTBkMDVjNzdjMTAwY2Q4YTg2ZTlkMjllZTZlM2M2ZTVkZTJmMjc2MWZlYTIyGOgF5w==: 00:27:52.322 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2I0Njk3ZGEzMGI0MWNlYTBkODM0MGYxMjQ5ODhiYzhjMmRkNWExZjJjMzJlMTAyObq5fQ==: 00:27:52.322 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:52.322 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:52.322 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODM3OTBkMDVjNzdjMTAwY2Q4YTg2ZTlkMjllZTZlM2M2ZTVkZTJmMjc2MWZlYTIyGOgF5w==: 00:27:52.322 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2I0Njk3ZGEzMGI0MWNlYTBkODM0MGYxMjQ5ODhiYzhjMmRkNWExZjJjMzJlMTAyObq5fQ==: ]] 00:27:52.322 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2I0Njk3ZGEzMGI0MWNlYTBkODM0MGYxMjQ5ODhiYzhjMmRkNWExZjJjMzJlMTAyObq5fQ==: 00:27:52.322 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:52.322 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.322 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.323 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.323 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:27:52.323 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:52.323 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:52.323 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:52.323 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.323 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.323 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:52.323 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.323 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:52.323 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:52.323 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:52.323 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:52.323 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:52.323 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:52.323 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:52.323 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:52.323 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:52.323 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:52.323 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:52.323 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.323 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.323 request: 00:27:52.323 { 00:27:52.323 "name": "nvme0", 00:27:52.323 "trtype": "tcp", 00:27:52.323 "traddr": "10.0.0.1", 00:27:52.323 "adrfam": "ipv4", 00:27:52.323 "trsvcid": "4420", 00:27:52.323 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:52.323 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:52.323 "prchk_reftag": false, 00:27:52.323 "prchk_guard": false, 00:27:52.323 "hdgst": false, 00:27:52.323 "ddgst": false, 00:27:52.323 "allow_unrecognized_csi": false, 00:27:52.323 "method": "bdev_nvme_attach_controller", 00:27:52.323 "req_id": 1 00:27:52.323 } 00:27:52.323 Got JSON-RPC error response 00:27:52.323 response: 00:27:52.323 { 00:27:52.323 "code": -5, 00:27:52.323 "message": "Input/output error" 00:27:52.323 } 00:27:52.323 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:52.323 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:52.323 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:52.323 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:52.323 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:52.323 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.323 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:27:52.323 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.323 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.323 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.323 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:27:52.323 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:27:52.323 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:52.323 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:52.323 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:52.323 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.323 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.323 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:52.323 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.323 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:52.323 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 
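
The request/response pair recorded above is the expected outcome of the first negative probe: the target now demands DH-HMAC-CHAP, so a bdev_nvme_attach_controller call that presents no --dhchap-key cannot complete the handshake and surfaces as JSON-RPC error -5 (Input/output error). A minimal sketch of the expect-failure pattern, with a hypothetical expect_failure() standing in for the suite's NOT() helper from autotest_common.sh and scripts/rpc.py standing in for rpc_cmd:

# Hypothetical stand-in for the suite's NOT() wrapper: succeed only when the
# wrapped command fails. rpc.py exits non-zero when the server returns a
# JSON-RPC error such as -5 (Input/output error).
expect_failure() {
	local es=0
	"$@" || es=$?
	((es != 0))
}

# No --dhchap-key: the DH-HMAC-CHAP handshake cannot start, so the attach
# must be rejected for the test to pass.
expect_failure rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
	-a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
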
00:27:52.323 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:52.323 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:52.323 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:52.323 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:52.323 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:52.323 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:52.323 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:52.323 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:52.323 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:52.323 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.323 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.584 request: 00:27:52.584 { 00:27:52.584 "name": "nvme0", 00:27:52.584 "trtype": "tcp", 00:27:52.584 "traddr": "10.0.0.1", 00:27:52.584 "adrfam": "ipv4", 00:27:52.584 "trsvcid": "4420", 00:27:52.584 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:52.584 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:52.584 "prchk_reftag": false, 00:27:52.584 "prchk_guard": false, 00:27:52.584 "hdgst": false, 00:27:52.584 "ddgst": false, 00:27:52.584 "dhchap_key": "key2", 00:27:52.584 "allow_unrecognized_csi": false, 00:27:52.584 "method": "bdev_nvme_attach_controller", 00:27:52.584 "req_id": 1 00:27:52.584 } 00:27:52.584 Got JSON-RPC error response 00:27:52.584 response: 00:27:52.584 { 00:27:52.584 "code": -5, 00:27:52.584 "message": "Input/output error" 00:27:52.584 } 00:27:52.584 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:52.584 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:52.584 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:52.584 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:52.584 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:52.584 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.584 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.584 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:27:52.584 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.584 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.584 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
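
After each rejected attach the suite confirms that no controller object survived: bdev_nvme_get_controllers piped through jq length must report 0, which is what the (( 0 == 0 )) frame above asserts. Later in the run the same probe is repeated with a one-second sleep (the host/auth.sh@137/@138 frames below) to wait for a controller to drop after its keys change. A sketch of both uses, assuming scripts/rpc.py and jq are on PATH:

# One-shot assertion: a failed handshake must leave no controller behind.
(($(rpc.py bdev_nvme_get_controllers | jq length) == 0))

# Polled variant, as in the auth.sh@137/@138 frames: wait for the controller
# list to drain once a key change drops the connection.
while (($(rpc.py bdev_nvme_get_controllers | jq length) != 0)); do
	sleep 1s
done
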
00:27:52.584 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:27:52.584 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:52.584 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:52.584 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:52.584 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.584 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.584 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:52.584 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.584 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:52.584 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:52.584 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:52.584 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:52.584 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:52.584 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:52.584 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:52.584 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:52.584 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:52.584 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:52.584 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:52.584 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.584 07:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.584 request: 00:27:52.584 { 00:27:52.584 "name": "nvme0", 00:27:52.584 "trtype": "tcp", 00:27:52.584 "traddr": "10.0.0.1", 00:27:52.584 "adrfam": "ipv4", 00:27:52.584 "trsvcid": "4420", 00:27:52.584 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:52.584 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:52.584 "prchk_reftag": false, 00:27:52.584 "prchk_guard": false, 00:27:52.584 "hdgst": false, 00:27:52.584 "ddgst": false, 00:27:52.584 "dhchap_key": "key1", 00:27:52.584 "dhchap_ctrlr_key": "ckey2", 00:27:52.584 "allow_unrecognized_csi": false, 00:27:52.584 "method": "bdev_nvme_attach_controller", 00:27:52.584 "req_id": 1 00:27:52.584 } 00:27:52.584 Got JSON-RPC error response 00:27:52.584 response: 00:27:52.584 { 00:27:52.584 "code": -5, 00:27:52.584 "message": "Input/output 
error" 00:27:52.584 } 00:27:52.584 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:52.584 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:52.584 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:52.584 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:52.584 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:52.584 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:27:52.584 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:52.584 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:52.584 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:52.584 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.584 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.584 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:52.584 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.584 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:52.584 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:52.584 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:52.584 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:52.584 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.584 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.845 nvme0n1 00:27:52.845 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.845 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:52.845 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.845 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:52.845 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:52.845 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:52.845 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjA0ZjU1OGQ4ODE0NWY1ZjY2ZTE5OTRiYTdjOGU5ZTnGARZ1: 00:27:52.845 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTVlOGI2NjJiZWU3YWQ1ZDRhMzYwZWI0YWQ3ZjNhNWKGRxA4: 00:27:52.845 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:52.845 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:52.845 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjA0ZjU1OGQ4ODE0NWY1ZjY2ZTE5OTRiYTdjOGU5ZTnGARZ1: 00:27:52.845 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTVlOGI2NjJiZWU3YWQ1ZDRhMzYwZWI0YWQ3ZjNhNWKGRxA4: ]] 00:27:52.845 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTVlOGI2NjJiZWU3YWQ1ZDRhMzYwZWI0YWQ3ZjNhNWKGRxA4: 00:27:52.845 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:52.845 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.845 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.845 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.845 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.845 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:27:52.845 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.845 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.845 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.845 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.845 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:52.845 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:52.845 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:52.845 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:52.845 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:52.845 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:52.845 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:52.845 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:52.845 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.845 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.106 request: 00:27:53.106 { 00:27:53.106 "name": "nvme0", 00:27:53.106 "dhchap_key": "key1", 00:27:53.106 "dhchap_ctrlr_key": "ckey2", 00:27:53.106 "method": "bdev_nvme_set_keys", 00:27:53.106 "req_id": 1 00:27:53.106 } 00:27:53.106 Got JSON-RPC error response 00:27:53.106 response: 00:27:53.106 { 00:27:53.106 "code": -13, 00:27:53.106 "message": "Permission denied" 00:27:53.106 } 00:27:53.106 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:53.106 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:53.106 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:53.106 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:53.106 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:27:53.106 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.106 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.106 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:53.106 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.106 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.106 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:27:53.106 07:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:54.049 07:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.049 07:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:54.049 07:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.049 07:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.049 07:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.049 07:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:27:54.049 07:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:54.991 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.991 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:54.991 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.991 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.991 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.252 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:27:55.252 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:55.252 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.252 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:55.252 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:55.252 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:55.252 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODM3OTBkMDVjNzdjMTAwY2Q4YTg2ZTlkMjllZTZlM2M2ZTVkZTJmMjc2MWZlYTIyGOgF5w==: 00:27:55.252 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2I0Njk3ZGEzMGI0MWNlYTBkODM0MGYxMjQ5ODhiYzhjMmRkNWExZjJjMzJlMTAyObq5fQ==: 00:27:55.252 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:55.252 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:55.252 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODM3OTBkMDVjNzdjMTAwY2Q4YTg2ZTlkMjllZTZlM2M2ZTVkZTJmMjc2MWZlYTIyGOgF5w==: 00:27:55.252 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2I0Njk3ZGEzMGI0MWNlYTBkODM0MGYxMjQ5ODhiYzhjMmRkNWExZjJjMzJlMTAyObq5fQ==: ]] 00:27:55.253 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:M2I0Njk3ZGEzMGI0MWNlYTBkODM0MGYxMjQ5ODhiYzhjMmRkNWExZjJjMzJlMTAyObq5fQ==: 00:27:55.253 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:27:55.253 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:55.253 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:55.253 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:55.253 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.253 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.253 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:55.253 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.253 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:55.253 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:55.253 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:55.253 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:55.253 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.253 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.253 nvme0n1 00:27:55.253 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.253 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:55.253 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.253 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:55.253 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:55.253 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:55.253 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjA0ZjU1OGQ4ODE0NWY1ZjY2ZTE5OTRiYTdjOGU5ZTnGARZ1: 00:27:55.253 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTVlOGI2NjJiZWU3YWQ1ZDRhMzYwZWI0YWQ3ZjNhNWKGRxA4: 00:27:55.253 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:55.253 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:55.253 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjA0ZjU1OGQ4ODE0NWY1ZjY2ZTE5OTRiYTdjOGU5ZTnGARZ1: 00:27:55.253 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTVlOGI2NjJiZWU3YWQ1ZDRhMzYwZWI0YWQ3ZjNhNWKGRxA4: ]] 00:27:55.253 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTVlOGI2NjJiZWU3YWQ1ZDRhMzYwZWI0YWQ3ZjNhNWKGRxA4: 00:27:55.253 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:55.253 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@650 -- # local es=0 00:27:55.253 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:55.253 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:55.253 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:55.253 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:55.253 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:55.253 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:55.253 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.253 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.253 request: 00:27:55.253 { 00:27:55.253 "name": "nvme0", 00:27:55.253 "dhchap_key": "key2", 00:27:55.253 "dhchap_ctrlr_key": "ckey1", 00:27:55.253 "method": "bdev_nvme_set_keys", 00:27:55.253 "req_id": 1 00:27:55.253 } 00:27:55.253 Got JSON-RPC error response 00:27:55.253 response: 00:27:55.253 { 00:27:55.253 "code": -13, 00:27:55.253 "message": "Permission denied" 00:27:55.253 } 00:27:55.253 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:55.253 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:55.253 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:55.253 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:55.253 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:55.253 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.253 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:55.253 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.253 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.253 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.513 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:27:55.513 07:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:27:56.453 07:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.453 07:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:56.453 07:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.453 07:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.453 07:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.453 07:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:27:56.453 07:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:27:56.453 07:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:27:56.453 07:10:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:27:56.453 07:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:27:56.453 07:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:27:56.453 07:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:56.453 07:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:27:56.453 07:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:56.453 07:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:56.453 rmmod nvme_tcp 00:27:56.453 rmmod nvme_fabrics 00:27:56.453 07:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:56.454 07:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:27:56.454 07:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:27:56.454 07:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@515 -- # '[' -n 3276878 ']' 00:27:56.454 07:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # killprocess 3276878 00:27:56.454 07:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 3276878 ']' 00:27:56.454 07:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 3276878 00:27:56.454 07:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:27:56.454 07:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:56.454 07:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3276878 00:27:56.715 07:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:56.715 07:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:56.715 07:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3276878' 00:27:56.715 killing process with pid 3276878 00:27:56.715 07:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 3276878 00:27:56.715 07:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 3276878 00:27:56.715 07:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:56.715 07:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:27:56.715 07:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:27:56.715 07:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:27:56.715 07:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-save 00:27:56.715 07:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:27:56.715 07:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-restore 00:27:56.715 07:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:56.715 07:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:56.715 07:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:56.715 07:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:27:56.715 07:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:58.627 07:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:58.627 07:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:58.627 07:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:58.888 07:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:27:58.888 07:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:27:58.888 07:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # echo 0 00:27:58.888 07:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:58.888 07:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:58.888 07:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:58.888 07:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:58.888 07:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:27:58.888 07:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:27:58.888 07:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:02.190 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:02.190 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:02.451 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:02.451 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:02.451 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:02.451 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:02.451 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:02.451 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:02.451 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:02.451 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:02.451 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:02.451 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:02.451 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:02.451 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:02.451 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:02.451 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:02.451 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:03.024 07:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.H2H /tmp/spdk.key-null.JlT /tmp/spdk.key-sha256.9eJ /tmp/spdk.key-sha384.R9D /tmp/spdk.key-sha512.zEa /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:28:03.024 07:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:06.325 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:06.325 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:06.325 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
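The clean_kernel_target sequence traced above tears the kernel nvmet target down in strict reverse order of creation: disable the namespace, unlink the subsystem from the port, then remove the configfs directories before unloading the modules. A minimal standalone sketch of that ordering, assuming the same nqn.2024-02.io.spdk:cnode0 subsystem, namespace 1 and port 1 this job created earlier:

    # sketch of the configfs teardown order; paths assume this run's layout
    nqn=nqn.2024-02.io.spdk:cnode0
    cfg=/sys/kernel/config/nvmet
    echo 0 > "$cfg/subsystems/$nqn/namespaces/1/enable"   # take the namespace offline first
    rm -f "$cfg/ports/1/subsystems/$nqn"                  # drop the port-to-subsystem symlink
    rmdir "$cfg/subsystems/$nqn/namespaces/1"             # now the directories can be removed
    rmdir "$cfg/ports/1"
    rmdir "$cfg/subsystems/$nqn"
    modprobe -r nvmet_tcp nvmet                           # modules only unload once configfs is empty

Attempting the rmdir calls out of order fails with EBUSY, which is why the helper always removes the port link before either directory.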
00:28:06.325 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:28:06.325 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:28:06.325 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:28:06.325 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:28:06.325 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:28:06.325 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:28:06.325 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:28:06.325 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:28:06.325 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:28:06.325 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:28:06.325 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:28:06.325 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:28:06.325 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:28:06.325 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
00:28:06.592
00:28:06.592 real 1m0.795s
00:28:06.592 user 0m54.420s
00:28:06.592 sys 0m16.214s
00:28:06.592 07:11:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable
00:28:06.592 07:11:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:06.592 ************************************
00:28:06.592 END TEST nvmf_auth_host
00:28:06.592 ************************************
00:28:06.909 07:11:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]]
00:28:06.909 07:11:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp
00:28:06.909 07:11:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:28:06.909 07:11:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:28:06.909 07:11:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:28:06.909 ************************************
00:28:06.909 START TEST nvmf_digest
00:28:06.909 ************************************
00:28:06.909 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp
00:28:06.909 * Looking for test storage...
00:28:06.909 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:06.909 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:06.909 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:28:06.909 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:06.909 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:06.909 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:06.909 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:06.909 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:06.909 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:28:06.909 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:28:06.909 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:28:06.909 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:28:06.909 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:28:06.909 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:28:06.909 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:28:06.909 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:06.909 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:28:06.909 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:28:06.909 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:06.909 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:06.909 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:28:06.909 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:28:06.909 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:06.909 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:28:06.909 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:28:06.909 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:28:06.909 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:28:06.909 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:06.909 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:28:06.909 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:28:06.909 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:06.909 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:06.909 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:28:06.909 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:06.909 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:06.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.909 --rc genhtml_branch_coverage=1 00:28:06.909 --rc genhtml_function_coverage=1 00:28:06.909 --rc genhtml_legend=1 00:28:06.909 --rc geninfo_all_blocks=1 00:28:06.909 --rc geninfo_unexecuted_blocks=1 00:28:06.909 00:28:06.909 ' 00:28:06.909 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:06.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.909 --rc genhtml_branch_coverage=1 00:28:06.909 --rc genhtml_function_coverage=1 00:28:06.909 --rc genhtml_legend=1 00:28:06.909 --rc geninfo_all_blocks=1 00:28:06.909 --rc geninfo_unexecuted_blocks=1 00:28:06.909 00:28:06.909 ' 00:28:06.909 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:06.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.909 --rc genhtml_branch_coverage=1 00:28:06.909 --rc genhtml_function_coverage=1 00:28:06.909 --rc genhtml_legend=1 00:28:06.909 --rc geninfo_all_blocks=1 00:28:06.909 --rc geninfo_unexecuted_blocks=1 00:28:06.909 00:28:06.909 ' 00:28:06.909 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:06.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.909 --rc genhtml_branch_coverage=1 00:28:06.910 --rc genhtml_function_coverage=1 00:28:06.910 --rc genhtml_legend=1 00:28:06.910 --rc geninfo_all_blocks=1 00:28:06.910 --rc geninfo_unexecuted_blocks=1 00:28:06.910 00:28:06.910 ' 00:28:06.910 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:06.910 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:06.910 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:06.910 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:06.910 
07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:06.910 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:06.910 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:06.910 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:06.910 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:06.910 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:06.910 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:06.910 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:06.910 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:06.910 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:06.910 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:06.910 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:06.910 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:06.910 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:06.910 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:06.910 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:28:06.910 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:06.910 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:06.910 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:06.910 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.910 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.910 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.910 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:28:06.910 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.910 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:28:06.910 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:06.910 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:06.910 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:06.910 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:06.910 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:06.910 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:06.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:06.910 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:06.910 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:06.910 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:07.195 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:07.195 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:07.195 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:07.195 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:07.195 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:07.195 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:07.195 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:07.195 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:07.195 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:07.195 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:07.195 07:11:06 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:07.195 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:07.195 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:07.195 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:07.195 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:07.195 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:28:07.195 07:11:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:15.336 
07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:15.336 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:15.336 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:15.336 Found net devices under 0000:4b:00.0: cvl_0_0 
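The probe above walks a fixed table of supported vendor:device IDs (Intel E810 0x1592/0x159b, X722 0x37d2, plus several Mellanox parts) and resolves each match to its kernel net interface through sysfs. Reduced to the single E810 ID present on this host, the logic amounts to the following sketch; it is an illustration, not the library function itself:

    # enumerate PCI functions with the Intel E810 ID seen in this run (8086:159b),
    # then map each one to the net device sysfs exposes underneath it
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        for net in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$net" ] && echo "Found net devices under $pci: ${net##*/}"
        done
    done

On this machine both 0000:4b:00.0 and 0000:4b:00.1 resolve to the renamed cvl_0_0 and cvl_0_1 interfaces, which the harness then splits into target and initiator sides.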
00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:15.336 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # is_hw=yes 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:15.336 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:15.337 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:15.337 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:15.337 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:15.337 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:15.337 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:15.337 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:15.337 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:15.337 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:15.337 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:15.337 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:15.337 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:15.337 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:28:15.337 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:15.337 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:15.337 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:15.337 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:15.337 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:15.337 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:15.337 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:15.337 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.628 ms 00:28:15.337 00:28:15.337 --- 10.0.0.2 ping statistics --- 00:28:15.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:15.337 rtt min/avg/max/mdev = 0.628/0.628/0.628/0.000 ms 00:28:15.337 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:15.337 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:15.337 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:28:15.337 00:28:15.337 --- 10.0.0.1 ping statistics --- 00:28:15.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:15.337 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:28:15.337 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:15.337 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # return 0 00:28:15.337 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:15.337 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:15.337 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:15.337 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:15.337 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:15.337 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:15.337 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:15.337 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:15.337 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:15.337 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:15.337 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:15.337 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:15.337 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:15.337 ************************************ 00:28:15.337 START TEST nvmf_digest_clean 00:28:15.337 ************************************ 00:28:15.337 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:28:15.337 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:28:15.337 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:15.337 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:28:15.337 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:15.337 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:15.337 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:15.337 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:15.337 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:15.337 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # nvmfpid=3293862 00:28:15.337 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # waitforlisten 3293862 00:28:15.337 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:15.337 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3293862 ']' 00:28:15.337 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:15.337 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:15.337 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:15.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:15.337 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:15.337 07:11:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:15.337 [2024-10-16 07:11:14.004096] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:28:15.337 [2024-10-16 07:11:14.004158] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:15.337 [2024-10-16 07:11:14.094738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:15.337 [2024-10-16 07:11:14.145311] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:15.337 [2024-10-16 07:11:14.145366] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:15.337 [2024-10-16 07:11:14.145375] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:15.337 [2024-10-16 07:11:14.145382] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:15.337 [2024-10-16 07:11:14.145388] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
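nvmfappstart, traced above, launches the target application inside the freshly built network namespace and then polls until its RPC socket answers. Condensed into a sketch (waitforlisten does more bookkeeping than this loop; the socket path is the default /var/tmp/spdk.sock used here):

    # start nvmf_tgt in the target netns, framework paused until RPCs arrive
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    # poll until the app's RPC server is reachable
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods > /dev/null 2>&1; do
        sleep 0.5
    done

--wait-for-rpc keeps the subsystem framework paused so configuration can be injected before initialization; the bdevperf instances later in this test are driven the same way.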
00:28:15.337 [2024-10-16 07:11:14.146174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:15.337 07:11:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:15.337 07:11:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:15.337 07:11:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:15.337 07:11:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:15.337 07:11:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:15.598 07:11:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:15.598 07:11:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:15.598 07:11:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:28:15.598 07:11:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:28:15.598 07:11:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.598 07:11:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:15.598 null0 00:28:15.598 [2024-10-16 07:11:14.972763] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:15.598 [2024-10-16 07:11:14.997083] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:15.598 07:11:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.598 07:11:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:28:15.598 07:11:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:15.598 07:11:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:15.598 07:11:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:15.598 07:11:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:15.598 07:11:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:15.598 07:11:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:15.598 07:11:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3294142 00:28:15.598 07:11:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3294142 /var/tmp/bperf.sock 00:28:15.598 07:11:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3294142 ']' 00:28:15.598 07:11:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:15.598 07:11:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:15.598 07:11:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:28:15.598 07:11:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:15.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:15.598 07:11:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:15.598 07:11:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:15.598 [2024-10-16 07:11:15.056891] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:28:15.598 [2024-10-16 07:11:15.056957] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3294142 ] 00:28:15.861 [2024-10-16 07:11:15.138489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:15.861 [2024-10-16 07:11:15.191260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:16.432 07:11:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:16.432 07:11:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:16.432 07:11:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:16.432 07:11:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:16.432 07:11:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:16.692 07:11:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:16.692 07:11:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:16.953 nvme0n1 00:28:16.953 07:11:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:16.953 07:11:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:17.214 Running I/O for 2 seconds... 
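Each run_bperf pass drives a dedicated bdevperf instance over its own UNIX socket: start the paused framework, attach an NVMe-oF controller with data digest enabled, then kick the workload through bdevperf.py. Back to back, the same RPCs traced above look like this (paths relative to the spdk checkout):

    rpc=/var/tmp/bperf.sock
    scripts/rpc.py -s $rpc framework_start_init
    scripts/rpc.py -s $rpc bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    examples/bdev/bdevperf/bdevperf.py -s $rpc perform_tests

--ddgst makes the initiator append a CRC32C data digest to every NVMe/TCP data PDU, which is what drives the accel crc32c counters checked after each run.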
00:28:19.099 18232.00 IOPS, 71.22 MiB/s [2024-10-16T05:11:18.598Z] 19411.00 IOPS, 75.82 MiB/s
00:28:19.099 Latency(us)
00:28:19.099 [2024-10-16T05:11:18.598Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:19.099 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:28:19.099 nvme0n1 : 2.01 19419.53 75.86 0.00 0.00 6581.88 3099.31 23265.28
00:28:19.099 [2024-10-16T05:11:18.598Z] ===================================================================================================================
00:28:19.099 [2024-10-16T05:11:18.598Z] Total : 19419.53 75.86 0.00 0.00 6581.88 3099.31 23265.28
00:28:19.099 {
00:28:19.099 "results": [
00:28:19.099 {
00:28:19.099 "job": "nvme0n1",
00:28:19.099 "core_mask": "0x2",
00:28:19.099 "workload": "randread",
00:28:19.099 "status": "finished",
00:28:19.099 "queue_depth": 128,
00:28:19.099 "io_size": 4096,
00:28:19.099 "runtime": 2.007773,
00:28:19.099 "iops": 19419.526012153765,
00:28:19.099 "mibps": 75.85752348497564,
00:28:19.099 "io_failed": 0,
00:28:19.099 "io_timeout": 0,
00:28:19.099 "avg_latency_us": 6581.879453535094,
00:28:19.099 "min_latency_us": 3099.306666666667,
00:28:19.099 "max_latency_us": 23265.28
00:28:19.099 }
00:28:19.099 ],
00:28:19.099 "core_count": 1
00:28:19.099 }
00:28:19.099 07:11:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:28:19.099 07:11:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:28:19.099 07:11:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:28:19.099 07:11:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:28:19.099 | select(.opcode=="crc32c")
00:28:19.099 | "\(.module_name) \(.executed)"'
00:28:19.099 07:11:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:28:19.360 07:11:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:28:19.360 07:11:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:28:19.360 07:11:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:28:19.360 07:11:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:28:19.360 07:11:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3294142
00:28:19.360 07:11:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3294142 ']'
00:28:19.360 07:11:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3294142
00:28:19.360 07:11:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname
00:28:19.360 07:11:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:28:19.360 07:11:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3294142
00:28:19.360 07:11:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:28:19.360 07:11:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1
= sudo ']' 00:28:19.360 07:11:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3294142' 00:28:19.360 killing process with pid 3294142 00:28:19.360 07:11:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3294142 00:28:19.360 Received shutdown signal, test time was about 2.000000 seconds 00:28:19.360 00:28:19.360 Latency(us) 00:28:19.360 [2024-10-16T05:11:18.860Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:19.361 [2024-10-16T05:11:18.860Z] =================================================================================================================== 00:28:19.361 [2024-10-16T05:11:18.860Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:19.361 07:11:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3294142 00:28:19.621 07:11:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:19.621 07:11:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:19.621 07:11:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:19.621 07:11:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:19.621 07:11:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:19.621 07:11:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:19.621 07:11:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:19.621 07:11:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3294891 00:28:19.621 07:11:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3294891 /var/tmp/bperf.sock 00:28:19.621 07:11:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3294891 ']' 00:28:19.621 07:11:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:19.621 07:11:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:19.621 07:11:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:19.621 07:11:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:19.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:19.621 07:11:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:19.621 07:11:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:19.621 [2024-10-16 07:11:18.944919] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
00:28:19.621 [2024-10-16 07:11:18.944978] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3294891 ] 00:28:19.621 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:19.621 Zero copy mechanism will not be used. 00:28:19.621 [2024-10-16 07:11:19.019697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:19.621 [2024-10-16 07:11:19.048514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:20.563 07:11:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:20.563 07:11:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:20.563 07:11:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:20.563 07:11:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:20.563 07:11:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:20.563 07:11:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:20.563 07:11:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:20.824 nvme0n1 00:28:20.824 07:11:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:20.824 07:11:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:20.824 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:20.824 Zero copy mechanism will not be used. 00:28:20.824 Running I/O for 2 seconds... 
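Stripped of the xtrace noise, bringing one of these bperf runs up is three RPCs against the bdevperf socket, all visible in the trace above (the absolute rpc.py/bdevperf.py paths are shortened here for readability):

    ./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init       # release --wait-for-rpc
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0   # data digest on
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests  # start the timed run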
00:28:23.148 3261.00 IOPS, 407.62 MiB/s [2024-10-16T05:11:22.647Z] 3136.50 IOPS, 392.06 MiB/s 00:28:23.148 Latency(us) 00:28:23.148 [2024-10-16T05:11:22.647Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:23.148 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:23.148 nvme0n1 : 2.05 3075.02 384.38 0.00 0.00 5106.54 638.29 48278.19 00:28:23.148 [2024-10-16T05:11:22.647Z] =================================================================================================================== 00:28:23.148 [2024-10-16T05:11:22.647Z] Total : 3075.02 384.38 0.00 0.00 5106.54 638.29 48278.19 00:28:23.148 { 00:28:23.148 "results": [ 00:28:23.148 { 00:28:23.148 "job": "nvme0n1", 00:28:23.148 "core_mask": "0x2", 00:28:23.148 "workload": "randread", 00:28:23.148 "status": "finished", 00:28:23.148 "queue_depth": 16, 00:28:23.148 "io_size": 131072, 00:28:23.148 "runtime": 2.045188, 00:28:23.148 "iops": 3075.022931877167, 00:28:23.148 "mibps": 384.3778664846459, 00:28:23.148 "io_failed": 0, 00:28:23.148 "io_timeout": 0, 00:28:23.148 "avg_latency_us": 5106.538256214554, 00:28:23.148 "min_latency_us": 638.2933333333333, 00:28:23.148 "max_latency_us": 48278.18666666667 00:28:23.148 } 00:28:23.148 ], 00:28:23.148 "core_count": 1 00:28:23.148 } 00:28:23.148 07:11:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:23.148 07:11:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:23.148 07:11:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:23.148 07:11:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:23.148 | select(.opcode=="crc32c") 00:28:23.148 | "\(.module_name) \(.executed)"' 00:28:23.148 07:11:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:23.148 07:11:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:23.148 07:11:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:23.148 07:11:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:23.148 07:11:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:23.148 07:11:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3294891 00:28:23.148 07:11:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3294891 ']' 00:28:23.148 07:11:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3294891 00:28:23.148 07:11:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:23.148 07:11:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:23.148 07:11:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3294891 00:28:23.148 07:11:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:23.148 07:11:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' 
reactor_1 = sudo ']' 00:28:23.148 07:11:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3294891' 00:28:23.148 killing process with pid 3294891 00:28:23.148 07:11:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3294891 00:28:23.148 Received shutdown signal, test time was about 2.000000 seconds 00:28:23.148 00:28:23.148 Latency(us) 00:28:23.148 [2024-10-16T05:11:22.647Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:23.148 [2024-10-16T05:11:22.647Z] =================================================================================================================== 00:28:23.148 [2024-10-16T05:11:22.647Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:23.148 07:11:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3294891 00:28:23.408 07:11:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:23.408 07:11:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:23.408 07:11:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:23.409 07:11:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:23.409 07:11:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:23.409 07:11:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:23.409 07:11:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:23.409 07:11:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3295577 00:28:23.409 07:11:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3295577 /var/tmp/bperf.sock 00:28:23.409 07:11:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3295577 ']' 00:28:23.409 07:11:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:23.409 07:11:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:23.409 07:11:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:23.409 07:11:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:23.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:23.409 07:11:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:23.409 07:11:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:23.409 [2024-10-16 07:11:22.771835] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
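Each run_bperf iteration launches a fresh bdevperf, changing only the workload triple (rw, bs, qd); the invocation above, spelled out (flag readings inferred from the surrounding trace: -m 2 matches "Reactor started on core 1" and the core_mask 0x2 in the results JSON, -t 2 matches the ~2 s runtime):

    ./build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock \       # core mask 0x2; RPC socket used by bperf_rpc/bperf_py
        -w randwrite -o 4096 -q 128 -t 2 \  # workload, IO size (bytes), queue depth, runtime (s)
        -z --wait-for-rpc                   # hold I/O for perform_tests, hold init for framework_start_init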
00:28:23.409 [2024-10-16 07:11:22.771892] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3295577 ] 00:28:23.409 [2024-10-16 07:11:22.844416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:23.409 [2024-10-16 07:11:22.873496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:23.669 07:11:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:23.669 07:11:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:23.669 07:11:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:23.669 07:11:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:23.669 07:11:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:23.669 07:11:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:23.669 07:11:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:24.239 nvme0n1 00:28:24.239 07:11:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:24.239 07:11:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:24.239 Running I/O for 2 seconds... 
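The results blob each run prints is plain JSON; a hypothetical post-processing step, assuming the blob has been captured to results.json, using the field names shown in the traces above:

    jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"' results.json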
00:28:26.562 29927.00 IOPS, 116.90 MiB/s [2024-10-16T05:11:26.061Z] 29767.50 IOPS, 116.28 MiB/s 00:28:26.562 Latency(us) 00:28:26.562 [2024-10-16T05:11:26.061Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:26.562 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:26.562 nvme0n1 : 2.01 29766.48 116.28 0.00 0.00 4293.04 2088.96 16930.13 00:28:26.562 [2024-10-16T05:11:26.061Z] =================================================================================================================== 00:28:26.562 [2024-10-16T05:11:26.061Z] Total : 29766.48 116.28 0.00 0.00 4293.04 2088.96 16930.13 00:28:26.562 { 00:28:26.562 "results": [ 00:28:26.562 { 00:28:26.562 "job": "nvme0n1", 00:28:26.562 "core_mask": "0x2", 00:28:26.562 "workload": "randwrite", 00:28:26.562 "status": "finished", 00:28:26.562 "queue_depth": 128, 00:28:26.562 "io_size": 4096, 00:28:26.562 "runtime": 2.005444, 00:28:26.562 "iops": 29766.475653271795, 00:28:26.562 "mibps": 116.27529552059295, 00:28:26.562 "io_failed": 0, 00:28:26.562 "io_timeout": 0, 00:28:26.562 "avg_latency_us": 4293.038104587207, 00:28:26.562 "min_latency_us": 2088.96, 00:28:26.562 "max_latency_us": 16930.133333333335 00:28:26.562 } 00:28:26.562 ], 00:28:26.562 "core_count": 1 00:28:26.562 } 00:28:26.562 07:11:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:26.562 07:11:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:26.562 07:11:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:26.562 07:11:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:26.562 | select(.opcode=="crc32c") 00:28:26.562 | "\(.module_name) \(.executed)"' 00:28:26.562 07:11:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:26.562 07:11:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:26.562 07:11:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:26.562 07:11:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:26.562 07:11:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:26.562 07:11:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3295577 00:28:26.562 07:11:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3295577 ']' 00:28:26.562 07:11:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3295577 00:28:26.562 07:11:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:26.562 07:11:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:26.562 07:11:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3295577 00:28:26.562 07:11:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:26.562 07:11:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' 
reactor_1 = sudo ']' 00:28:26.562 07:11:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3295577' 00:28:26.562 killing process with pid 3295577 00:28:26.562 07:11:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3295577 00:28:26.562 Received shutdown signal, test time was about 2.000000 seconds 00:28:26.562 00:28:26.562 Latency(us) 00:28:26.562 [2024-10-16T05:11:26.061Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:26.562 [2024-10-16T05:11:26.061Z] =================================================================================================================== 00:28:26.562 [2024-10-16T05:11:26.061Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:26.562 07:11:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3295577 00:28:26.562 07:11:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:26.562 07:11:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:26.562 07:11:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:26.562 07:11:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:26.562 07:11:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:26.562 07:11:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:26.562 07:11:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:26.562 07:11:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3296268 00:28:26.562 07:11:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3296268 /var/tmp/bperf.sock 00:28:26.562 07:11:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3296268 ']' 00:28:26.562 07:11:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:26.562 07:11:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:26.562 07:11:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:26.562 07:11:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:26.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:26.562 07:11:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:26.562 07:11:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:26.562 [2024-10-16 07:11:26.056427] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
00:28:26.562 [2024-10-16 07:11:26.056483] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3296268 ] 00:28:26.562 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:26.562 Zero copy mechanism will not be used. 00:28:26.823 [2024-10-16 07:11:26.133517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:26.823 [2024-10-16 07:11:26.162151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:27.394 07:11:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:27.394 07:11:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:27.394 07:11:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:27.394 07:11:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:27.394 07:11:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:27.655 07:11:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:27.655 07:11:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:28.226 nvme0n1 00:28:28.226 07:11:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:28.226 07:11:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:28.226 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:28.226 Zero copy mechanism will not be used. 00:28:28.226 Running I/O for 2 seconds... 
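The pass condition checked after every clean run is visible in the traces: accel_get_stats must report crc32c executions, and the executing module must match the expected one (software here, since scan_dsa=false). Reconstructed as a sketch:

    read -r acc_module acc_executed < <(
        ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    (( acc_executed > 0 ))            # digests were actually computed during the run
    [[ $acc_module == software ]]     # ...by the software module, not a DSA offload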
00:28:30.107 5525.00 IOPS, 690.62 MiB/s [2024-10-16T05:11:29.606Z] 5639.50 IOPS, 704.94 MiB/s 00:28:30.107 Latency(us) 00:28:30.107 [2024-10-16T05:11:29.606Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:30.107 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:30.107 nvme0n1 : 2.01 5634.16 704.27 0.00 0.00 2834.42 1235.63 15619.41 00:28:30.107 [2024-10-16T05:11:29.606Z] =================================================================================================================== 00:28:30.107 [2024-10-16T05:11:29.606Z] Total : 5634.16 704.27 0.00 0.00 2834.42 1235.63 15619.41 00:28:30.107 { 00:28:30.107 "results": [ 00:28:30.107 { 00:28:30.107 "job": "nvme0n1", 00:28:30.107 "core_mask": "0x2", 00:28:30.107 "workload": "randwrite", 00:28:30.107 "status": "finished", 00:28:30.107 "queue_depth": 16, 00:28:30.107 "io_size": 131072, 00:28:30.107 "runtime": 2.005269, 00:28:30.107 "iops": 5634.156813873849, 00:28:30.107 "mibps": 704.2696017342312, 00:28:30.107 "io_failed": 0, 00:28:30.107 "io_timeout": 0, 00:28:30.107 "avg_latency_us": 2834.417135776244, 00:28:30.107 "min_latency_us": 1235.6266666666668, 00:28:30.107 "max_latency_us": 15619.413333333334 00:28:30.107 } 00:28:30.107 ], 00:28:30.107 "core_count": 1 00:28:30.107 } 00:28:30.107 07:11:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:30.107 07:11:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:30.107 07:11:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:30.107 07:11:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:30.107 | select(.opcode=="crc32c") 00:28:30.107 | "\(.module_name) \(.executed)"' 00:28:30.107 07:11:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:30.368 07:11:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:30.368 07:11:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:30.368 07:11:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:30.368 07:11:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:30.368 07:11:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3296268 00:28:30.368 07:11:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3296268 ']' 00:28:30.368 07:11:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3296268 00:28:30.368 07:11:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:30.368 07:11:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:30.368 07:11:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3296268 00:28:30.368 07:11:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:30.368 07:11:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # 
'[' reactor_1 = sudo ']' 00:28:30.368 07:11:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3296268' 00:28:30.368 killing process with pid 3296268 00:28:30.368 07:11:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3296268 00:28:30.368 Received shutdown signal, test time was about 2.000000 seconds 00:28:30.368 00:28:30.368 Latency(us) 00:28:30.368 [2024-10-16T05:11:29.867Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:30.368 [2024-10-16T05:11:29.867Z] =================================================================================================================== 00:28:30.368 [2024-10-16T05:11:29.867Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:30.368 07:11:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3296268 00:28:30.629 07:11:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3293862 00:28:30.629 07:11:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3293862 ']' 00:28:30.629 07:11:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3293862 00:28:30.629 07:11:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:30.629 07:11:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:30.629 07:11:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3293862 00:28:30.629 07:11:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:30.629 07:11:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:30.629 07:11:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3293862' 00:28:30.629 killing process with pid 3293862 00:28:30.629 07:11:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3293862 00:28:30.629 07:11:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3293862 00:28:30.629 00:28:30.629 real 0m16.129s 00:28:30.629 user 0m31.758s 00:28:30.629 sys 0m3.660s 00:28:30.629 07:11:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:30.629 07:11:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:30.629 ************************************ 00:28:30.629 END TEST nvmf_digest_clean 00:28:30.629 ************************************ 00:28:30.629 07:11:30 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:30.629 07:11:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:30.629 07:11:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:30.629 07:11:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:30.889 ************************************ 00:28:30.889 START TEST nvmf_digest_error 00:28:30.889 ************************************ 00:28:30.889 07:11:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # 
run_digest_error 00:28:30.889 07:11:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:30.889 07:11:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:30.889 07:11:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:30.889 07:11:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:30.889 07:11:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # nvmfpid=3296982 00:28:30.889 07:11:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # waitforlisten 3296982 00:28:30.889 07:11:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:30.889 07:11:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3296982 ']' 00:28:30.889 07:11:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:30.889 07:11:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:30.889 07:11:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:30.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:30.889 07:11:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:30.889 07:11:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:30.889 [2024-10-16 07:11:30.208144] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:28:30.889 [2024-10-16 07:11:30.208202] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:30.889 [2024-10-16 07:11:30.297140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:30.889 [2024-10-16 07:11:30.331953] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:30.889 [2024-10-16 07:11:30.331986] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:30.889 [2024-10-16 07:11:30.331992] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:30.889 [2024-10-16 07:11:30.331997] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:30.889 [2024-10-16 07:11:30.332001] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:30.889 [2024-10-16 07:11:30.332509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:31.830 07:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:31.830 07:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:28:31.830 07:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:31.830 07:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:31.830 07:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:31.830 07:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:31.830 07:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:31.831 07:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.831 07:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:31.831 [2024-10-16 07:11:31.062509] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:31.831 07:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.831 07:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:28:31.831 07:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:28:31.831 07:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.831 07:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:31.831 null0 00:28:31.831 [2024-10-16 07:11:31.139966] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:31.831 [2024-10-16 07:11:31.164178] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:31.831 07:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.831 07:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:31.831 07:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:31.831 07:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:31.831 07:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:31.831 07:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:31.831 07:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3297327 00:28:31.831 07:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3297327 /var/tmp/bperf.sock 00:28:31.831 07:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3297327 ']' 00:28:31.831 07:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
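The error-path test reroutes the target's crc32c handling to the error-injection module, presumably why the target is started with --wait-for-rpc: the reassignment has to land before the accel framework initializes. A sketch against the target's RPC socket (the default /var/tmp/spdk.sock path is an assumption, matching rpc_cmd's usual default):

    ./scripts/rpc.py -s /var/tmp/spdk.sock accel_assign_opc -o crc32c -m error
    # per the notice above, crc32c now flows through the error module;
    # it is armed later with accel_error_inject_error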
00:28:31.831 07:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:31.831 07:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:31.831 07:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:31.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:31.831 07:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:31.831 07:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:31.831 [2024-10-16 07:11:31.219977] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:28:31.831 [2024-10-16 07:11:31.220031] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3297327 ] 00:28:31.831 [2024-10-16 07:11:31.296924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:31.831 [2024-10-16 07:11:31.326612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:32.772 07:11:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:32.772 07:11:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:28:32.772 07:11:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:32.772 07:11:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:32.772 07:11:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:32.772 07:11:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.772 07:11:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:32.772 07:11:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.772 07:11:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:32.772 07:11:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:33.033 nvme0n1 00:28:33.033 07:11:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:33.033 07:11:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.034 07:11:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
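The injection sequence for this run splits across the two processes, as the traces above show: bdevperf is told to retry failed I/O indefinitely and attaches with data digest enabled, while the target's error module is armed to corrupt 256 crc32c results (injection is first set to disable so the attach itself goes through cleanly). A sketch, in trace order (rpc_cmd resolving to the target's default socket is an assumption; the bperf calls show their socket explicitly in the trace):

    # initiator (bdevperf) side: collect NVMe error stats, retry failed I/O indefinitely (-1)
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # target side: injection off while the controller attaches
    ./scripts/rpc.py accel_error_inject_error -o crc32c -t disable
    # initiator: attach with data digest (--ddgst) enabled
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # target: corrupt the results of the next 256 crc32c operations
    ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256

With the target now emitting bad data digests, every READ that follows fails host-side digest verification ("data digest error on tqpair"), is completed as COMMAND TRANSIENT TRANSPORT ERROR (00/22), and is retried, which is the storm of records below.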
00:28:33.034 07:11:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.034 07:11:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:33.034 07:11:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:33.296 Running I/O for 2 seconds... 00:28:33.296 [2024-10-16 07:11:32.608240] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.296 [2024-10-16 07:11:32.608272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:19736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.296 [2024-10-16 07:11:32.608282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.296 [2024-10-16 07:11:32.619847] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.296 [2024-10-16 07:11:32.619869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:20039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.296 [2024-10-16 07:11:32.619877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.296 [2024-10-16 07:11:32.629465] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.296 [2024-10-16 07:11:32.629485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:14749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.296 [2024-10-16 07:11:32.629493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.296 [2024-10-16 07:11:32.637272] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.296 [2024-10-16 07:11:32.637291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:6161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.296 [2024-10-16 07:11:32.637298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.296 [2024-10-16 07:11:32.647236] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.296 [2024-10-16 07:11:32.647255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:7678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.296 [2024-10-16 07:11:32.647262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.296 [2024-10-16 07:11:32.656668] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.296 [2024-10-16 07:11:32.656686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:12978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.296 [2024-10-16 07:11:32.656693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.296 [2024-10-16 07:11:32.665363] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.296 [2024-10-16 07:11:32.665380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:16803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.296 [2024-10-16 07:11:32.665387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.296 [2024-10-16 07:11:32.674821] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.296 [2024-10-16 07:11:32.674838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.296 [2024-10-16 07:11:32.674849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.296 [2024-10-16 07:11:32.683539] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.296 [2024-10-16 07:11:32.683556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.296 [2024-10-16 07:11:32.683563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.296 [2024-10-16 07:11:32.691600] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.296 [2024-10-16 07:11:32.691617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:24939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.296 [2024-10-16 07:11:32.691631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.296 [2024-10-16 07:11:32.701054] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.296 [2024-10-16 07:11:32.701071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:25173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.296 [2024-10-16 07:11:32.701078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.296 [2024-10-16 07:11:32.709814] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.296 [2024-10-16 07:11:32.709832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:6050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.296 [2024-10-16 07:11:32.709838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.296 [2024-10-16 07:11:32.720207] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.296 [2024-10-16 07:11:32.720225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:5615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.296 [2024-10-16 07:11:32.720232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.296 [2024-10-16 07:11:32.728956] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.296 [2024-10-16 07:11:32.728974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:7885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.296 [2024-10-16 07:11:32.728981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.296 [2024-10-16 07:11:32.738945] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.296 [2024-10-16 07:11:32.738963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:16587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.296 [2024-10-16 07:11:32.738969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.296 [2024-10-16 07:11:32.747857] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.296 [2024-10-16 07:11:32.747875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.296 [2024-10-16 07:11:32.747881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.296 [2024-10-16 07:11:32.755839] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.296 [2024-10-16 07:11:32.755862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:13994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.296 [2024-10-16 07:11:32.755869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.296 [2024-10-16 07:11:32.764983] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.296 [2024-10-16 07:11:32.765000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:23395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.296 [2024-10-16 07:11:32.765007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.296 [2024-10-16 07:11:32.774266] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.296 [2024-10-16 07:11:32.774283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:17356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.296 [2024-10-16 07:11:32.774289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.296 [2024-10-16 07:11:32.783234] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.296 [2024-10-16 07:11:32.783251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:10885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.296 [2024-10-16 07:11:32.783257] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.296 [2024-10-16 07:11:32.792025] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.296 [2024-10-16 07:11:32.792042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:6358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.296 [2024-10-16 07:11:32.792049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.557 [2024-10-16 07:11:32.800246] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.557 [2024-10-16 07:11:32.800263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.557 [2024-10-16 07:11:32.800270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.557 [2024-10-16 07:11:32.809246] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.557 [2024-10-16 07:11:32.809264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.557 [2024-10-16 07:11:32.809270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.557 [2024-10-16 07:11:32.818681] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.557 [2024-10-16 07:11:32.818697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:16493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.557 [2024-10-16 07:11:32.818704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.557 [2024-10-16 07:11:32.828019] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.557 [2024-10-16 07:11:32.828036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:11265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.557 [2024-10-16 07:11:32.828043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.557 [2024-10-16 07:11:32.838204] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.557 [2024-10-16 07:11:32.838221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.557 [2024-10-16 07:11:32.838227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.557 [2024-10-16 07:11:32.846686] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.557 [2024-10-16 07:11:32.846703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:33.557 [2024-10-16 07:11:32.846713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.557 [2024-10-16 07:11:32.858651] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.557 [2024-10-16 07:11:32.858669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.557 [2024-10-16 07:11:32.858676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.557 [2024-10-16 07:11:32.869563] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.557 [2024-10-16 07:11:32.869580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:9324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.557 [2024-10-16 07:11:32.869586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.557 [2024-10-16 07:11:32.878774] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.557 [2024-10-16 07:11:32.878791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:5797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.557 [2024-10-16 07:11:32.878797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.557 [2024-10-16 07:11:32.887952] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.557 [2024-10-16 07:11:32.887969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:10023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.557 [2024-10-16 07:11:32.887976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.557 [2024-10-16 07:11:32.897683] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.557 [2024-10-16 07:11:32.897700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.557 [2024-10-16 07:11:32.897707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.557 [2024-10-16 07:11:32.905754] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.557 [2024-10-16 07:11:32.905771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:11526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.557 [2024-10-16 07:11:32.905778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.557 [2024-10-16 07:11:32.914943] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.557 [2024-10-16 07:11:32.914960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15167 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.558 [2024-10-16 07:11:32.914966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.558 [2024-10-16 07:11:32.924162] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.558 [2024-10-16 07:11:32.924179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.558 [2024-10-16 07:11:32.924186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.558 [2024-10-16 07:11:32.933192] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.558 [2024-10-16 07:11:32.933212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:16565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.558 [2024-10-16 07:11:32.933219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.558 [2024-10-16 07:11:32.941981] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.558 [2024-10-16 07:11:32.941999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:20847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.558 [2024-10-16 07:11:32.942005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.558 [2024-10-16 07:11:32.950545] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.558 [2024-10-16 07:11:32.950562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:23193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.558 [2024-10-16 07:11:32.950569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.558 [2024-10-16 07:11:32.959690] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.558 [2024-10-16 07:11:32.959708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:11861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.558 [2024-10-16 07:11:32.959714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.558 [2024-10-16 07:11:32.968495] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.558 [2024-10-16 07:11:32.968513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:18206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.558 [2024-10-16 07:11:32.968519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.558 [2024-10-16 07:11:32.978603] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.558 [2024-10-16 07:11:32.978621] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:78 nsid:1 lba:2037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.558 [2024-10-16 07:11:32.978627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.558 [2024-10-16 07:11:32.987709] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.558 [2024-10-16 07:11:32.987726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.558 [2024-10-16 07:11:32.987732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.558 [2024-10-16 07:11:32.996917] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.558 [2024-10-16 07:11:32.996934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:18803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.558 [2024-10-16 07:11:32.996940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.558 [2024-10-16 07:11:33.005239] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.558 [2024-10-16 07:11:33.005256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.558 [2024-10-16 07:11:33.005262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.558 [2024-10-16 07:11:33.014009] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.558 [2024-10-16 07:11:33.014026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.558 [2024-10-16 07:11:33.014033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.558 [2024-10-16 07:11:33.023188] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.558 [2024-10-16 07:11:33.023206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:21091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.558 [2024-10-16 07:11:33.023212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.558 [2024-10-16 07:11:33.032196] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.558 [2024-10-16 07:11:33.032213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:10193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.558 [2024-10-16 07:11:33.032220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.558 [2024-10-16 07:11:33.040733] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.558 [2024-10-16 07:11:33.040750] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.558 [2024-10-16 07:11:33.040757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.558 [2024-10-16 07:11:33.049040] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.558 [2024-10-16 07:11:33.049056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:15883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.558 [2024-10-16 07:11:33.049063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.819 [2024-10-16 07:11:33.057926] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.819 [2024-10-16 07:11:33.057944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.819 [2024-10-16 07:11:33.057950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.819 [2024-10-16 07:11:33.066996] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.819 [2024-10-16 07:11:33.067013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:9654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.819 [2024-10-16 07:11:33.067020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.819 [2024-10-16 07:11:33.076609] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.819 [2024-10-16 07:11:33.076626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:19839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.819 [2024-10-16 07:11:33.076633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.819 [2024-10-16 07:11:33.084619] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.819 [2024-10-16 07:11:33.084636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:6569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.819 [2024-10-16 07:11:33.084646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.819 [2024-10-16 07:11:33.093921] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.819 [2024-10-16 07:11:33.093938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.819 [2024-10-16 07:11:33.093944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.820 [2024-10-16 07:11:33.105151] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 
00:28:33.820 [2024-10-16 07:11:33.105168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:21272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.820 [2024-10-16 07:11:33.105174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.820 [2024-10-16 07:11:33.113598] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.820 [2024-10-16 07:11:33.113616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:24738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.820 [2024-10-16 07:11:33.113622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.820 [2024-10-16 07:11:33.124214] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.820 [2024-10-16 07:11:33.124232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:22403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.820 [2024-10-16 07:11:33.124238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.820 [2024-10-16 07:11:33.132927] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.820 [2024-10-16 07:11:33.132945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:25466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.820 [2024-10-16 07:11:33.132952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.820 [2024-10-16 07:11:33.141314] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.820 [2024-10-16 07:11:33.141332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:10658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.820 [2024-10-16 07:11:33.141338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.820 [2024-10-16 07:11:33.150608] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.820 [2024-10-16 07:11:33.150625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:13028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.820 [2024-10-16 07:11:33.150631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.820 [2024-10-16 07:11:33.160163] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.820 [2024-10-16 07:11:33.160180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:5464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.820 [2024-10-16 07:11:33.160186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.820 [2024-10-16 07:11:33.168678] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.820 [2024-10-16 07:11:33.168698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.820 [2024-10-16 07:11:33.168705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.820 [2024-10-16 07:11:33.178267] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.820 [2024-10-16 07:11:33.178283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:12967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.820 [2024-10-16 07:11:33.178290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.820 [2024-10-16 07:11:33.186903] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.820 [2024-10-16 07:11:33.186920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.820 [2024-10-16 07:11:33.186926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.820 [2024-10-16 07:11:33.194988] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.820 [2024-10-16 07:11:33.195005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.820 [2024-10-16 07:11:33.195012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.820 [2024-10-16 07:11:33.204433] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.820 [2024-10-16 07:11:33.204450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:15028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.820 [2024-10-16 07:11:33.204457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.820 [2024-10-16 07:11:33.213272] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.820 [2024-10-16 07:11:33.213289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:1330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.820 [2024-10-16 07:11:33.213295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.820 [2024-10-16 07:11:33.222093] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.820 [2024-10-16 07:11:33.222110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:9156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.820 [2024-10-16 07:11:33.222117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.820 [2024-10-16 07:11:33.231814] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.820 [2024-10-16 07:11:33.231831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:7012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.820 [2024-10-16 07:11:33.231837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.820 [2024-10-16 07:11:33.239655] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.820 [2024-10-16 07:11:33.239671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:6107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.820 [2024-10-16 07:11:33.239678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.820 [2024-10-16 07:11:33.248776] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.820 [2024-10-16 07:11:33.248793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:15353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.820 [2024-10-16 07:11:33.248799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.820 [2024-10-16 07:11:33.257972] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.820 [2024-10-16 07:11:33.257989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.820 [2024-10-16 07:11:33.257995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.820 [2024-10-16 07:11:33.268006] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.820 [2024-10-16 07:11:33.268022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:1160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.820 [2024-10-16 07:11:33.268029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.820 [2024-10-16 07:11:33.275923] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.820 [2024-10-16 07:11:33.275940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:24455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.820 [2024-10-16 07:11:33.275946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.820 [2024-10-16 07:11:33.285968] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.820 [2024-10-16 07:11:33.285984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:7012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.820 [2024-10-16 07:11:33.285990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:28:33.820 [2024-10-16 07:11:33.296921] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.820 [2024-10-16 07:11:33.296938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:17914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.820 [2024-10-16 07:11:33.296944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.820 [2024-10-16 07:11:33.304711] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.820 [2024-10-16 07:11:33.304728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:24455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.820 [2024-10-16 07:11:33.304734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.820 [2024-10-16 07:11:33.314410] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:33.820 [2024-10-16 07:11:33.314427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.820 [2024-10-16 07:11:33.314433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.081 [2024-10-16 07:11:33.323797] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.081 [2024-10-16 07:11:33.323814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:13194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.081 [2024-10-16 07:11:33.323824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.081 [2024-10-16 07:11:33.332945] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.082 [2024-10-16 07:11:33.332962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:19421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.082 [2024-10-16 07:11:33.332968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.082 [2024-10-16 07:11:33.342404] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.082 [2024-10-16 07:11:33.342421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:6031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.082 [2024-10-16 07:11:33.342428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.082 [2024-10-16 07:11:33.352485] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.082 [2024-10-16 07:11:33.352502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:23564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.082 [2024-10-16 07:11:33.352508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.082 [2024-10-16 07:11:33.362576] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.082 [2024-10-16 07:11:33.362594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.082 [2024-10-16 07:11:33.362600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.082 [2024-10-16 07:11:33.373076] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.082 [2024-10-16 07:11:33.373093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:16996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.082 [2024-10-16 07:11:33.373099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.082 [2024-10-16 07:11:33.381016] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.082 [2024-10-16 07:11:33.381033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:16511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.082 [2024-10-16 07:11:33.381040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.082 [2024-10-16 07:11:33.390572] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.082 [2024-10-16 07:11:33.390589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:1582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.082 [2024-10-16 07:11:33.390595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.082 [2024-10-16 07:11:33.399288] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.082 [2024-10-16 07:11:33.399305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:20694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.082 [2024-10-16 07:11:33.399312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.082 [2024-10-16 07:11:33.408596] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.082 [2024-10-16 07:11:33.408613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.082 [2024-10-16 07:11:33.408620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.082 [2024-10-16 07:11:33.415705] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.082 [2024-10-16 07:11:33.415722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:4328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.082 [2024-10-16 07:11:33.415728] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.082 [2024-10-16 07:11:33.425519] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.082 [2024-10-16 07:11:33.425536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:5474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.082 [2024-10-16 07:11:33.425542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.082 [2024-10-16 07:11:33.433892] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.082 [2024-10-16 07:11:33.433909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:13979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.082 [2024-10-16 07:11:33.433915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.082 [2024-10-16 07:11:33.443574] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.082 [2024-10-16 07:11:33.443591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.082 [2024-10-16 07:11:33.443597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.082 [2024-10-16 07:11:33.452865] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.082 [2024-10-16 07:11:33.452882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:24583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.082 [2024-10-16 07:11:33.452888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.082 [2024-10-16 07:11:33.460771] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.082 [2024-10-16 07:11:33.460788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:15813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.082 [2024-10-16 07:11:33.460794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.082 [2024-10-16 07:11:33.469978] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.082 [2024-10-16 07:11:33.469994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.082 [2024-10-16 07:11:33.470001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.082 [2024-10-16 07:11:33.479647] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.082 [2024-10-16 07:11:33.479663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:11396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.082 [2024-10-16 07:11:33.479673] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.082 [2024-10-16 07:11:33.489146] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.082 [2024-10-16 07:11:33.489163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:20430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.082 [2024-10-16 07:11:33.489169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.082 [2024-10-16 07:11:33.497720] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.082 [2024-10-16 07:11:33.497737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:24483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.082 [2024-10-16 07:11:33.497743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.082 [2024-10-16 07:11:33.506729] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.082 [2024-10-16 07:11:33.506747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:1177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.082 [2024-10-16 07:11:33.506753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.082 [2024-10-16 07:11:33.513889] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.082 [2024-10-16 07:11:33.513906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:63 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.082 [2024-10-16 07:11:33.513912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.082 [2024-10-16 07:11:33.523795] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.082 [2024-10-16 07:11:33.523812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:18139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.082 [2024-10-16 07:11:33.523819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.082 [2024-10-16 07:11:33.531939] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.082 [2024-10-16 07:11:33.531956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.082 [2024-10-16 07:11:33.531963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.082 [2024-10-16 07:11:33.542447] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.082 [2024-10-16 07:11:33.542463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:9616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:34.082 [2024-10-16 07:11:33.542470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.082 [2024-10-16 07:11:33.551194] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.082 [2024-10-16 07:11:33.551210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:23262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.082 [2024-10-16 07:11:33.551217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.082 [2024-10-16 07:11:33.559462] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.082 [2024-10-16 07:11:33.559481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:13951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.082 [2024-10-16 07:11:33.559488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.082 [2024-10-16 07:11:33.568557] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.082 [2024-10-16 07:11:33.568574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.082 [2024-10-16 07:11:33.568581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.082 [2024-10-16 07:11:33.578005] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.082 [2024-10-16 07:11:33.578022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.082 [2024-10-16 07:11:33.578028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.343 [2024-10-16 07:11:33.587144] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.343 [2024-10-16 07:11:33.587160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:4712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.343 [2024-10-16 07:11:33.587167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.343 27718.00 IOPS, 108.27 MiB/s [2024-10-16T05:11:33.842Z] [2024-10-16 07:11:33.595596] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.343 [2024-10-16 07:11:33.595613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:16568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.343 [2024-10-16 07:11:33.595619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.343 [2024-10-16 07:11:33.605679] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.343 [2024-10-16 07:11:33.605696] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:2526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.343 [2024-10-16 07:11:33.605702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.343 [2024-10-16 07:11:33.614639] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.343 [2024-10-16 07:11:33.614655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:23286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.344 [2024-10-16 07:11:33.614662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.344 [2024-10-16 07:11:33.623934] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.344 [2024-10-16 07:11:33.623951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:8718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.344 [2024-10-16 07:11:33.623957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.344 [2024-10-16 07:11:33.632349] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.344 [2024-10-16 07:11:33.632366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:20022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.344 [2024-10-16 07:11:33.632373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.344 [2024-10-16 07:11:33.642250] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.344 [2024-10-16 07:11:33.642267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:18009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.344 [2024-10-16 07:11:33.642273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.344 [2024-10-16 07:11:33.651254] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.344 [2024-10-16 07:11:33.651272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:9283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.344 [2024-10-16 07:11:33.651278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.344 [2024-10-16 07:11:33.659836] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.344 [2024-10-16 07:11:33.659857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:24628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.344 [2024-10-16 07:11:33.659864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.344 [2024-10-16 07:11:33.667877] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 
00:28:34.344 [2024-10-16 07:11:33.667894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:12377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.344 [2024-10-16 07:11:33.667900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.344 [2024-10-16 07:11:33.677013] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.344 [2024-10-16 07:11:33.677030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:19095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.344 [2024-10-16 07:11:33.677036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.344 [2024-10-16 07:11:33.686309] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.344 [2024-10-16 07:11:33.686327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.344 [2024-10-16 07:11:33.686333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.344 [2024-10-16 07:11:33.695034] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.344 [2024-10-16 07:11:33.695052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.344 [2024-10-16 07:11:33.695058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.344 [2024-10-16 07:11:33.703857] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.344 [2024-10-16 07:11:33.703874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.344 [2024-10-16 07:11:33.703880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.344 [2024-10-16 07:11:33.713531] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.344 [2024-10-16 07:11:33.713548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:11781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.344 [2024-10-16 07:11:33.713558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.344 [2024-10-16 07:11:33.722018] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.344 [2024-10-16 07:11:33.722035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.344 [2024-10-16 07:11:33.722042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.344 [2024-10-16 07:11:33.729836] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.344 [2024-10-16 07:11:33.729858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:8575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.344 [2024-10-16 07:11:33.729865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.344 [2024-10-16 07:11:33.739089] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.344 [2024-10-16 07:11:33.739106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:22393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.344 [2024-10-16 07:11:33.739112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.344 [2024-10-16 07:11:33.748159] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.344 [2024-10-16 07:11:33.748176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:1598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.344 [2024-10-16 07:11:33.748182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.344 [2024-10-16 07:11:33.756960] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.344 [2024-10-16 07:11:33.756977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:22847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.344 [2024-10-16 07:11:33.756983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.344 [2024-10-16 07:11:33.766342] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.344 [2024-10-16 07:11:33.766359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:8026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.344 [2024-10-16 07:11:33.766365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.344 [2024-10-16 07:11:33.774130] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.344 [2024-10-16 07:11:33.774147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:19622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.344 [2024-10-16 07:11:33.774153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.344 [2024-10-16 07:11:33.784794] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.344 [2024-10-16 07:11:33.784812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:4366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.344 [2024-10-16 07:11:33.784818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.344 [2024-10-16 07:11:33.797010] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.344 [2024-10-16 07:11:33.797030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:17683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.344 [2024-10-16 07:11:33.797037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.344 [2024-10-16 07:11:33.804777] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.344 [2024-10-16 07:11:33.804794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.344 [2024-10-16 07:11:33.804800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.344 [2024-10-16 07:11:33.814312] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.344 [2024-10-16 07:11:33.814330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:22230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.344 [2024-10-16 07:11:33.814336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.344 [2024-10-16 07:11:33.822888] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.344 [2024-10-16 07:11:33.822905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:3695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.344 [2024-10-16 07:11:33.822912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.344 [2024-10-16 07:11:33.831990] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.344 [2024-10-16 07:11:33.832007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.344 [2024-10-16 07:11:33.832014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.344 [2024-10-16 07:11:33.840977] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.344 [2024-10-16 07:11:33.840994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.344 [2024-10-16 07:11:33.841000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.607 [2024-10-16 07:11:33.850510] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.607 [2024-10-16 07:11:33.850529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:8407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.607 [2024-10-16 07:11:33.850535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
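Every triplet in this stretch follows one pattern: nvme_tcp_accel_seq_recv_compute_crc32_done reports a data digest mismatch on the receive path for tqpair 0xca4470, the offending READ is printed, and the command completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22) — the retryable generic status — while the same tqpair keeps servicing subsequent commands. The NVMe/TCP data digest (DDGST) is a CRC32C over the DATA PDU payload. The sketch below is a minimal, self-contained illustration of that check, assuming a single contiguous payload buffer; it is not SPDK's implementation, and the function and variable names (crc32c, ddgst_check, payload) are hypothetical.

    /*
     * Minimal sketch of the NVMe/TCP data digest (DDGST) check that the
     * entries above report failing.  Illustrative only, not SPDK code.
     */
    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Bitwise CRC-32C (Castagnoli), the algorithm NVMe/TCP specifies for
     * header and data digests.  Reflected polynomial 0x82F63B38. */
    static uint32_t crc32c(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;

        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int bit = 0; bit < 8; bit++)
                crc = (crc >> 1) ^ (0x82F63B38u & -(crc & 1u));
        }
        return crc ^ 0xFFFFFFFFu;
    }

    /* Returns nonzero when the digest carried in the PDU does not match
     * the payload -- a "data digest error" like the ones logged above. */
    static int ddgst_check(const uint8_t *payload, size_t len, uint32_t ddgst)
    {
        return crc32c(payload, len) != ddgst;
    }

    int main(void)
    {
        uint8_t data[512] = { 0xAB };      /* stand-in C2H DATA payload */
        uint32_t good = crc32c(data, sizeof(data));

        printf("digest ok:  %d\n", ddgst_check(data, sizeof(data), good));
        data[100] ^= 0x01;                 /* corrupt one bit "in flight" */
        printf("digest err: %d\n", ddgst_check(data, sizeof(data), good));
        return 0;
    }

A digest mismatch means the payload cannot be trusted even though the TCP connection itself is healthy, which is consistent with what the log shows: each failure surfaces as a retryable transient transport error on the individual command rather than a teardown of the qpair.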
00:28:34.607 [2024-10-16 07:11:33.858214] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.607 [2024-10-16 07:11:33.858231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:11917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.607 [2024-10-16 07:11:33.858238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.607 [2024-10-16 07:11:33.867603] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.607 [2024-10-16 07:11:33.867621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.607 [2024-10-16 07:11:33.867627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.607 [2024-10-16 07:11:33.876758] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.607 [2024-10-16 07:11:33.876775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:14592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.607 [2024-10-16 07:11:33.876782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.607 [2024-10-16 07:11:33.884634] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.607 [2024-10-16 07:11:33.884652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:5675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.607 [2024-10-16 07:11:33.884659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.607 [2024-10-16 07:11:33.894964] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.607 [2024-10-16 07:11:33.894982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:17349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.607 [2024-10-16 07:11:33.894988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.607 [2024-10-16 07:11:33.903065] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.607 [2024-10-16 07:11:33.903082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.607 [2024-10-16 07:11:33.903088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.607 [2024-10-16 07:11:33.911703] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470) 00:28:34.607 [2024-10-16 07:11:33.911720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:2889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.607 [2024-10-16 07:11:33.911726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:34.607 [... dozens of repeated entries omitted: nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470), each followed by a READ command notice (nvme_qpair.c: 243, qid:1, len:1) and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion (nvme_qpair.c: 474), timestamps 2024-10-16 07:11:33.920 through 07:11:34.587 ...]
00:28:35.133 27785.50 IOPS, 108.54 MiB/s [2024-10-16T05:11:34.632Z]
00:28:35.133 [2024-10-16 07:11:34.597067] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xca4470)
00:28:35.133 [2024-10-16 07:11:34.597084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:11858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.133 [2024-10-16 07:11:34.597090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:35.394
00:28:35.394 Latency(us)
00:28:35.394 [2024-10-16T05:11:34.893Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:35.394 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:28:35.394 nvme0n1 : 2.04 27261.26 106.49 0.00 0.00 4600.60 2266.45 46530.56
00:28:35.394 [2024-10-16T05:11:34.893Z] ===================================================================================================================
00:28:35.394 [2024-10-16T05:11:34.893Z] Total : 27261.26 106.49 0.00 0.00 4600.60 2266.45 46530.56
00:28:35.394 {
00:28:35.394 "results": [
00:28:35.394 {
00:28:35.394 "job": "nvme0n1",
00:28:35.394 "core_mask": "0x2",
00:28:35.394 "workload": "randread",
00:28:35.394 "status": "finished",
00:28:35.394 "queue_depth": 128,
00:28:35.394 "io_size": 4096,
00:28:35.394 "runtime": 2.043156,
"iops": 27261.25660497779, 00:28:35.394 "mibps": 106.48928361319449, 00:28:35.394 "io_failed": 0, 00:28:35.394 "io_timeout": 0, 00:28:35.394 "avg_latency_us": 4600.597284690928, 00:28:35.394 "min_latency_us": 2266.4533333333334, 00:28:35.394 "max_latency_us": 46530.56 00:28:35.394 } 00:28:35.394 ], 00:28:35.394 "core_count": 1 00:28:35.394 } 00:28:35.394 07:11:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:35.394 07:11:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:35.394 07:11:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:35.394 | .driver_specific 00:28:35.394 | .nvme_error 00:28:35.394 | .status_code 00:28:35.394 | .command_transient_transport_error' 00:28:35.394 07:11:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:35.394 07:11:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 218 > 0 )) 00:28:35.394 07:11:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3297327 00:28:35.394 07:11:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3297327 ']' 00:28:35.394 07:11:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3297327 00:28:35.394 07:11:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:28:35.394 07:11:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:35.394 07:11:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3297327 00:28:35.655 07:11:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:35.655 07:11:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:35.655 07:11:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3297327' 00:28:35.655 killing process with pid 3297327 00:28:35.655 07:11:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3297327 00:28:35.655 Received shutdown signal, test time was about 2.000000 seconds 00:28:35.655 00:28:35.655 Latency(us) 00:28:35.655 [2024-10-16T05:11:35.154Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:35.655 [2024-10-16T05:11:35.154Z] =================================================================================================================== 00:28:35.655 [2024-10-16T05:11:35.154Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:35.655 07:11:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3297327 00:28:35.655 07:11:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:28:35.655 07:11:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:35.655 07:11:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:35.655 07:11:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- 
00:28:35.655 07:11:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:28:35.655 07:11:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:35.655 07:11:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:28:35.655 07:11:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:28:35.655 07:11:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:28:35.655 07:11:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3298015
00:28:35.655 07:11:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3298015 /var/tmp/bperf.sock
00:28:35.655 07:11:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3298015 ']'
00:28:35.655 07:11:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:28:35.655 07:11:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:35.655 07:11:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:28:35.655 07:11:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:35.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:35.655 07:11:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:28:35.655 07:11:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:35.655 [2024-10-16 07:11:35.066591] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization...
00:28:35.655 [2024-10-16 07:11:35.066646] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3298015 ]
00:28:35.655 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:35.655 Zero copy mechanism will not be used.
00:28:35.655 [2024-10-16 07:11:35.143448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:35.915 [2024-10-16 07:11:35.172181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:36.487 07:11:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:28:36.487 07:11:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:28:36.487 07:11:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:36.487 07:11:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:36.747 07:11:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:36.747 07:11:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:36.747 07:11:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:36.747 07:11:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:36.747 07:11:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:36.747 07:11:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:37.008 nvme0n1
00:28:37.008 07:11:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:28:37.008 07:11:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:37.008 07:11:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:37.008 07:11:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:37.008 07:11:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:37.008 07:11:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:37.269 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:37.269 Zero copy mechanism will not be used.
00:28:37.269 Running I/O for 2 seconds...
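The trace above is the complete setup for the next case: randread, 131072-byte I/O, queue depth 16, against a controller attached with data digest enabled. Condensed into the underlying RPC calls, keeping every flag exactly as logged and assuming the socket layout from the trace (/var/tmp/bperf.sock for the bdevperf app; the accel_error_inject_error calls go through rpc_cmd, shown here against the target application's default RPC socket):

    # bdevperf side: keep per-status-code NVMe error counters and retry
    # failed I/O indefinitely so injected errors cannot abort the run.
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options \
        --nvme-error-stat --bdev-retry-count -1

    # Clear any CRC32C error injection left over from the previous case.
    scripts/rpc.py accel_error_inject_error -o crc32c -t disable

    # Attach the subsystem with data digest enabled (--ddgst), so data
    # PDUs carry a CRC32C digest that the host verifies on receive.
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Re-arm the injection: corrupt crc32c results (flags exactly as in the
    # trace), which is what produces the data digest errors that follow.
    scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32

    # Kick off the timed workload bdevperf was launched with
    # (-w randread -o 131072 -q 16 -t 2).
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests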
00:28:37.269 [... dozens of repeated entries omitted: nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900), each followed by a READ command notice (nvme_qpair.c: 243, qid:1, len:32) and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion (nvme_qpair.c: 474), timestamps 2024-10-16 07:11:36.566 through 07:11:36.993 ...]
00:28:37.532 [2024-10-16 07:11:37.004741] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900)
00:28:37.532 [2024-10-16 07:11:37.004759] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.532 [2024-10-16 07:11:37.004766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.532 [2024-10-16 07:11:37.014261] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:37.532 [2024-10-16 07:11:37.014279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.532 [2024-10-16 07:11:37.014286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.532 [2024-10-16 07:11:37.022227] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:37.532 [2024-10-16 07:11:37.022246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.532 [2024-10-16 07:11:37.022252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.532 [2024-10-16 07:11:37.030505] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:37.532 [2024-10-16 07:11:37.030523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.532 [2024-10-16 07:11:37.030530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.794 [2024-10-16 07:11:37.036233] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:37.794 [2024-10-16 07:11:37.036252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.794 [2024-10-16 07:11:37.036259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.794 [2024-10-16 07:11:37.044426] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:37.794 [2024-10-16 07:11:37.044444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.794 [2024-10-16 07:11:37.044451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.794 [2024-10-16 07:11:37.053615] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:37.794 [2024-10-16 07:11:37.053633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.794 [2024-10-16 07:11:37.053640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.794 [2024-10-16 07:11:37.065187] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:37.794 
[2024-10-16 07:11:37.065205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.794 [2024-10-16 07:11:37.065212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.794 [2024-10-16 07:11:37.077209] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:37.794 [2024-10-16 07:11:37.077226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.794 [2024-10-16 07:11:37.077233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.794 [2024-10-16 07:11:37.089108] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:37.794 [2024-10-16 07:11:37.089126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.794 [2024-10-16 07:11:37.089132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.794 [2024-10-16 07:11:37.101407] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:37.794 [2024-10-16 07:11:37.101426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.794 [2024-10-16 07:11:37.101432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.794 [2024-10-16 07:11:37.113410] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:37.794 [2024-10-16 07:11:37.113429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.794 [2024-10-16 07:11:37.113435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.794 [2024-10-16 07:11:37.126194] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:37.794 [2024-10-16 07:11:37.126212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.794 [2024-10-16 07:11:37.126219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.794 [2024-10-16 07:11:37.138989] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:37.794 [2024-10-16 07:11:37.139008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.794 [2024-10-16 07:11:37.139014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.794 [2024-10-16 07:11:37.150155] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1830900) 00:28:37.794 [2024-10-16 07:11:37.150173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.794 [2024-10-16 07:11:37.150179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.794 [2024-10-16 07:11:37.161355] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:37.794 [2024-10-16 07:11:37.161373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.794 [2024-10-16 07:11:37.161379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.794 [2024-10-16 07:11:37.169975] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:37.794 [2024-10-16 07:11:37.169992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.794 [2024-10-16 07:11:37.170002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.794 [2024-10-16 07:11:37.179055] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:37.794 [2024-10-16 07:11:37.179074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.794 [2024-10-16 07:11:37.179080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.794 [2024-10-16 07:11:37.189550] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:37.794 [2024-10-16 07:11:37.189568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.794 [2024-10-16 07:11:37.189574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.794 [2024-10-16 07:11:37.201105] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:37.794 [2024-10-16 07:11:37.201123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.794 [2024-10-16 07:11:37.201129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.794 [2024-10-16 07:11:37.212175] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:37.794 [2024-10-16 07:11:37.212193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.794 [2024-10-16 07:11:37.212199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.794 [2024-10-16 07:11:37.222348] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:37.794 [2024-10-16 07:11:37.222367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.794 [2024-10-16 07:11:37.222373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.794 [2024-10-16 07:11:37.233404] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:37.794 [2024-10-16 07:11:37.233422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.794 [2024-10-16 07:11:37.233429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.794 [2024-10-16 07:11:37.243785] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:37.795 [2024-10-16 07:11:37.243803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.795 [2024-10-16 07:11:37.243810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.795 [2024-10-16 07:11:37.255182] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:37.795 [2024-10-16 07:11:37.255199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.795 [2024-10-16 07:11:37.255206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.795 [2024-10-16 07:11:37.265616] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:37.795 [2024-10-16 07:11:37.265637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.795 [2024-10-16 07:11:37.265644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.795 [2024-10-16 07:11:37.276009] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:37.795 [2024-10-16 07:11:37.276027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.795 [2024-10-16 07:11:37.276034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.795 [2024-10-16 07:11:37.285385] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:37.795 [2024-10-16 07:11:37.285403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.795 [2024-10-16 07:11:37.285409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:28:38.056 [2024-10-16 07:11:37.296432] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.056 [2024-10-16 07:11:37.296451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.056 [2024-10-16 07:11:37.296457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.056 [2024-10-16 07:11:37.306629] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.056 [2024-10-16 07:11:37.306647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.056 [2024-10-16 07:11:37.306653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.056 [2024-10-16 07:11:37.315222] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.056 [2024-10-16 07:11:37.315240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.056 [2024-10-16 07:11:37.315246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.056 [2024-10-16 07:11:37.324867] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.056 [2024-10-16 07:11:37.324885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.056 [2024-10-16 07:11:37.324892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.056 [2024-10-16 07:11:37.333296] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.056 [2024-10-16 07:11:37.333315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.056 [2024-10-16 07:11:37.333321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.056 [2024-10-16 07:11:37.343305] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.056 [2024-10-16 07:11:37.343324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.056 [2024-10-16 07:11:37.343330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.056 [2024-10-16 07:11:37.354685] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.056 [2024-10-16 07:11:37.354704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.056 [2024-10-16 07:11:37.354711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.056 [2024-10-16 07:11:37.365302] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.056 [2024-10-16 07:11:37.365320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.056 [2024-10-16 07:11:37.365327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.056 [2024-10-16 07:11:37.374026] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.056 [2024-10-16 07:11:37.374045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.056 [2024-10-16 07:11:37.374051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.056 [2024-10-16 07:11:37.385103] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.056 [2024-10-16 07:11:37.385122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.057 [2024-10-16 07:11:37.385128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.057 [2024-10-16 07:11:37.396039] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.057 [2024-10-16 07:11:37.396057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.057 [2024-10-16 07:11:37.396064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.057 [2024-10-16 07:11:37.407495] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.057 [2024-10-16 07:11:37.407514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.057 [2024-10-16 07:11:37.407520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.057 [2024-10-16 07:11:37.419546] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.057 [2024-10-16 07:11:37.419565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.057 [2024-10-16 07:11:37.419571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.057 [2024-10-16 07:11:37.431135] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.057 [2024-10-16 07:11:37.431154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.057 [2024-10-16 07:11:37.431160] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.057 [2024-10-16 07:11:37.443431] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.057 [2024-10-16 07:11:37.443448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.057 [2024-10-16 07:11:37.443458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.057 [2024-10-16 07:11:37.455748] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.057 [2024-10-16 07:11:37.455767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.057 [2024-10-16 07:11:37.455773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.057 [2024-10-16 07:11:37.468391] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.057 [2024-10-16 07:11:37.468410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.057 [2024-10-16 07:11:37.468417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.057 [2024-10-16 07:11:37.481562] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.057 [2024-10-16 07:11:37.481580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.057 [2024-10-16 07:11:37.481586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.057 [2024-10-16 07:11:37.491884] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.057 [2024-10-16 07:11:37.491902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.057 [2024-10-16 07:11:37.491908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.057 [2024-10-16 07:11:37.501675] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.057 [2024-10-16 07:11:37.501694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.057 [2024-10-16 07:11:37.501700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.057 [2024-10-16 07:11:37.512838] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.057 [2024-10-16 07:11:37.512861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
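What these records show: this pass of the test injects data-digest corruption, so each controller-to-host data PDU for a READ arrives with a bad trailing digest. The host's receive path (nvme_tcp_accel_seq_recv_compute_crc32_done) recomputes the digest, flags the mismatch, and the command is completed with a transient transport error, exactly as the paired NOTICE lines report. NVMe/TCP's data digest (DDGST) is a CRC32C over the PDU payload. Below is a minimal illustrative sketch of that check, not SPDK's actual implementation; the buffer contents and the wire digest value are hypothetical stand-ins.

/* Illustrative only -- not SPDK source. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Bitwise CRC32C (Castagnoli), reflected polynomial 0x82F63B78. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int k = 0; k < 8; k++) {
            crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
        }
    }
    return ~crc;
}

int main(void)
{
    uint8_t payload[512] = {0};          /* stand-in for C2H PDU data      */
    uint32_t wire_ddgst = 0x12345678u;   /* hypothetical (corrupted) DDGST */
    uint32_t computed = crc32c(payload, sizeof(payload));

    if (computed != wire_ddgst) {
        /* This mismatch is what the log records as a data digest error. */
        fprintf(stderr, "data digest error: computed=0x%08x wire=0x%08x\n",
                computed, wire_ddgst);
        return 1;
    }
    return 0;
}

A transient transport error (status 00/22) indicates the failure may succeed on retry, which is consistent with the queue pair staying up and the stream of READs continuing below.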
00:28:38.057 [2024-10-16 07:11:37.512868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.057 [2024-10-16 07:11:37.524476] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.057 [2024-10-16 07:11:37.524493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.057 [2024-10-16 07:11:37.524500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.057 [2024-10-16 07:11:37.534090] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.057 [2024-10-16 07:11:37.534108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.057 [2024-10-16 07:11:37.534114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.057 [2024-10-16 07:11:37.544800] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.057 [2024-10-16 07:11:37.544818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.057 [2024-10-16 07:11:37.544825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.318 [2024-10-16 07:11:37.556206] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.318 [2024-10-16 07:11:37.556225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.318 [2024-10-16 07:11:37.556231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.318 2927.00 IOPS, 365.88 MiB/s [2024-10-16T05:11:37.817Z] [2024-10-16 07:11:37.568605] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.318 [2024-10-16 07:11:37.568624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.318 [2024-10-16 07:11:37.568630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.318 [2024-10-16 07:11:37.580346] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.318 [2024-10-16 07:11:37.580364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.318 [2024-10-16 07:11:37.580370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.318 [2024-10-16 07:11:37.591077] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.318 [2024-10-16 07:11:37.591095] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.318 [2024-10-16 07:11:37.591101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.318 [2024-10-16 07:11:37.603162] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.318 [2024-10-16 07:11:37.603181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.318 [2024-10-16 07:11:37.603187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.319 [2024-10-16 07:11:37.614770] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.319 [2024-10-16 07:11:37.614789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.319 [2024-10-16 07:11:37.614796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.319 [2024-10-16 07:11:37.626652] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.319 [2024-10-16 07:11:37.626670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.319 [2024-10-16 07:11:37.626677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.319 [2024-10-16 07:11:37.638073] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.319 [2024-10-16 07:11:37.638091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.319 [2024-10-16 07:11:37.638101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.319 [2024-10-16 07:11:37.649130] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.319 [2024-10-16 07:11:37.649148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.319 [2024-10-16 07:11:37.649154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.319 [2024-10-16 07:11:37.660800] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.319 [2024-10-16 07:11:37.660818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.319 [2024-10-16 07:11:37.660824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.319 [2024-10-16 07:11:37.672721] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.319 
[2024-10-16 07:11:37.672739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.319 [2024-10-16 07:11:37.672746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.319 [2024-10-16 07:11:37.685322] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.319 [2024-10-16 07:11:37.685340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.319 [2024-10-16 07:11:37.685346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.319 [2024-10-16 07:11:37.696584] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.319 [2024-10-16 07:11:37.696602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.319 [2024-10-16 07:11:37.696609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.319 [2024-10-16 07:11:37.708998] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.319 [2024-10-16 07:11:37.709017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.319 [2024-10-16 07:11:37.709024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.319 [2024-10-16 07:11:37.718629] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.319 [2024-10-16 07:11:37.718647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.319 [2024-10-16 07:11:37.718653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.319 [2024-10-16 07:11:37.729337] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.319 [2024-10-16 07:11:37.729356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.319 [2024-10-16 07:11:37.729363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.319 [2024-10-16 07:11:37.739962] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.319 [2024-10-16 07:11:37.739983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.319 [2024-10-16 07:11:37.739989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.319 [2024-10-16 07:11:37.749157] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1830900) 00:28:38.319 [2024-10-16 07:11:37.749176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.319 [2024-10-16 07:11:37.749183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.319 [2024-10-16 07:11:37.760500] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.319 [2024-10-16 07:11:37.760519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.319 [2024-10-16 07:11:37.760525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.319 [2024-10-16 07:11:37.770920] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.319 [2024-10-16 07:11:37.770938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.319 [2024-10-16 07:11:37.770944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.319 [2024-10-16 07:11:37.781355] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.319 [2024-10-16 07:11:37.781373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.319 [2024-10-16 07:11:37.781379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.319 [2024-10-16 07:11:37.793368] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.319 [2024-10-16 07:11:37.793386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.319 [2024-10-16 07:11:37.793393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.319 [2024-10-16 07:11:37.803524] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.319 [2024-10-16 07:11:37.803542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.319 [2024-10-16 07:11:37.803549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.319 [2024-10-16 07:11:37.814133] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.319 [2024-10-16 07:11:37.814152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.319 [2024-10-16 07:11:37.814158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.581 [2024-10-16 07:11:37.824170] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.581 [2024-10-16 07:11:37.824187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.581 [2024-10-16 07:11:37.824194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.581 [2024-10-16 07:11:37.834778] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.581 [2024-10-16 07:11:37.834797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.581 [2024-10-16 07:11:37.834803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.581 [2024-10-16 07:11:37.845346] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.581 [2024-10-16 07:11:37.845365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.581 [2024-10-16 07:11:37.845371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.581 [2024-10-16 07:11:37.853057] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.581 [2024-10-16 07:11:37.853075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.581 [2024-10-16 07:11:37.853082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.581 [2024-10-16 07:11:37.860156] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.581 [2024-10-16 07:11:37.860175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.581 [2024-10-16 07:11:37.860181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.581 [2024-10-16 07:11:37.870408] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.581 [2024-10-16 07:11:37.870426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.581 [2024-10-16 07:11:37.870432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.581 [2024-10-16 07:11:37.881108] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.581 [2024-10-16 07:11:37.881127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.581 [2024-10-16 07:11:37.881133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:28:38.581 [2024-10-16 07:11:37.890242] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.581 [2024-10-16 07:11:37.890260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.581 [2024-10-16 07:11:37.890267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.581 [2024-10-16 07:11:37.900112] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.581 [2024-10-16 07:11:37.900130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.581 [2024-10-16 07:11:37.900137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.581 [2024-10-16 07:11:37.911411] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.581 [2024-10-16 07:11:37.911429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.581 [2024-10-16 07:11:37.911439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.581 [2024-10-16 07:11:37.920878] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.581 [2024-10-16 07:11:37.920896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.581 [2024-10-16 07:11:37.920902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.581 [2024-10-16 07:11:37.927636] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.581 [2024-10-16 07:11:37.927655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.581 [2024-10-16 07:11:37.927661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.581 [2024-10-16 07:11:37.938011] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.581 [2024-10-16 07:11:37.938030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.581 [2024-10-16 07:11:37.938037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.581 [2024-10-16 07:11:37.948873] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.581 [2024-10-16 07:11:37.948892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.581 [2024-10-16 07:11:37.948898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.581 [2024-10-16 07:11:37.961242] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.581 [2024-10-16 07:11:37.961259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.581 [2024-10-16 07:11:37.961265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.581 [2024-10-16 07:11:37.973631] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.581 [2024-10-16 07:11:37.973649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.581 [2024-10-16 07:11:37.973656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.581 [2024-10-16 07:11:37.985776] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.581 [2024-10-16 07:11:37.985794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.581 [2024-10-16 07:11:37.985800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.581 [2024-10-16 07:11:37.996721] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.581 [2024-10-16 07:11:37.996739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.581 [2024-10-16 07:11:37.996746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.581 [2024-10-16 07:11:38.007571] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.581 [2024-10-16 07:11:38.007589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.581 [2024-10-16 07:11:38.007595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.581 [2024-10-16 07:11:38.018595] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.582 [2024-10-16 07:11:38.018614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.582 [2024-10-16 07:11:38.018621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.582 [2024-10-16 07:11:38.027458] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.582 [2024-10-16 07:11:38.027477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.582 [2024-10-16 07:11:38.027483] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.582 [2024-10-16 07:11:38.038563] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.582 [2024-10-16 07:11:38.038582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.582 [2024-10-16 07:11:38.038589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.582 [2024-10-16 07:11:38.050302] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.582 [2024-10-16 07:11:38.050320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.582 [2024-10-16 07:11:38.050327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.582 [2024-10-16 07:11:38.060515] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.582 [2024-10-16 07:11:38.060532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.582 [2024-10-16 07:11:38.060539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.582 [2024-10-16 07:11:38.072189] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.582 [2024-10-16 07:11:38.072208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.582 [2024-10-16 07:11:38.072214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.842 [2024-10-16 07:11:38.082018] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.843 [2024-10-16 07:11:38.082037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.843 [2024-10-16 07:11:38.082043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.843 [2024-10-16 07:11:38.093879] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.843 [2024-10-16 07:11:38.093898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.843 [2024-10-16 07:11:38.093911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.843 [2024-10-16 07:11:38.103747] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.843 [2024-10-16 07:11:38.103764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.843 [2024-10-16 07:11:38.103770] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.843 [2024-10-16 07:11:38.116289] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.843 [2024-10-16 07:11:38.116308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.843 [2024-10-16 07:11:38.116315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.843 [2024-10-16 07:11:38.127643] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.843 [2024-10-16 07:11:38.127660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.843 [2024-10-16 07:11:38.127666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.843 [2024-10-16 07:11:38.137809] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.843 [2024-10-16 07:11:38.137826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.843 [2024-10-16 07:11:38.137832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.843 [2024-10-16 07:11:38.148040] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.843 [2024-10-16 07:11:38.148057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.843 [2024-10-16 07:11:38.148064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.843 [2024-10-16 07:11:38.159902] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.843 [2024-10-16 07:11:38.159920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.843 [2024-10-16 07:11:38.159927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.843 [2024-10-16 07:11:38.173111] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.843 [2024-10-16 07:11:38.173130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.843 [2024-10-16 07:11:38.173136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.843 [2024-10-16 07:11:38.186785] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.843 [2024-10-16 07:11:38.186803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:38.843 [2024-10-16 07:11:38.186810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.843 [2024-10-16 07:11:38.199486] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.843 [2024-10-16 07:11:38.199507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.843 [2024-10-16 07:11:38.199513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.843 [2024-10-16 07:11:38.212228] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.843 [2024-10-16 07:11:38.212247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.843 [2024-10-16 07:11:38.212253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.843 [2024-10-16 07:11:38.222143] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.843 [2024-10-16 07:11:38.222161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.843 [2024-10-16 07:11:38.222168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.843 [2024-10-16 07:11:38.228765] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.843 [2024-10-16 07:11:38.228783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.843 [2024-10-16 07:11:38.228789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.843 [2024-10-16 07:11:38.240021] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.843 [2024-10-16 07:11:38.240039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.843 [2024-10-16 07:11:38.240045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.843 [2024-10-16 07:11:38.250151] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.843 [2024-10-16 07:11:38.250169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.843 [2024-10-16 07:11:38.250175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.843 [2024-10-16 07:11:38.259976] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.843 [2024-10-16 07:11:38.259993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11520 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.843 [2024-10-16 07:11:38.259999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.843 [2024-10-16 07:11:38.268250] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.843 [2024-10-16 07:11:38.268268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.843 [2024-10-16 07:11:38.268275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.843 [2024-10-16 07:11:38.276419] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.843 [2024-10-16 07:11:38.276438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.843 [2024-10-16 07:11:38.276444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.843 [2024-10-16 07:11:38.281651] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.843 [2024-10-16 07:11:38.281670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.843 [2024-10-16 07:11:38.281676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.843 [2024-10-16 07:11:38.290779] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.843 [2024-10-16 07:11:38.290797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.843 [2024-10-16 07:11:38.290804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.843 [2024-10-16 07:11:38.298072] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.843 [2024-10-16 07:11:38.298090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.843 [2024-10-16 07:11:38.298097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.843 [2024-10-16 07:11:38.304293] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.843 [2024-10-16 07:11:38.304311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.843 [2024-10-16 07:11:38.304317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.843 [2024-10-16 07:11:38.312794] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.843 [2024-10-16 07:11:38.312812] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.843 [2024-10-16 07:11:38.312818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.843 [2024-10-16 07:11:38.319234] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.843 [2024-10-16 07:11:38.319252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.843 [2024-10-16 07:11:38.319259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.843 [2024-10-16 07:11:38.329079] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.843 [2024-10-16 07:11:38.329097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.843 [2024-10-16 07:11:38.329104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.843 [2024-10-16 07:11:38.337174] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.843 [2024-10-16 07:11:38.337193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.843 [2024-10-16 07:11:38.337199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.843 [2024-10-16 07:11:38.341958] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:38.843 [2024-10-16 07:11:38.341976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.844 [2024-10-16 07:11:38.341986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.105 [2024-10-16 07:11:38.349082] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:39.105 [2024-10-16 07:11:38.349101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.105 [2024-10-16 07:11:38.349107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.105 [2024-10-16 07:11:38.354364] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:39.105 [2024-10-16 07:11:38.354383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.105 [2024-10-16 07:11:38.354389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.105 [2024-10-16 07:11:38.362761] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:39.105 [2024-10-16 07:11:38.362779] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.105 [2024-10-16 07:11:38.362785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.105 [2024-10-16 07:11:38.373082] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:39.105 [2024-10-16 07:11:38.373100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.105 [2024-10-16 07:11:38.373106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.105 [2024-10-16 07:11:38.382455] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:39.105 [2024-10-16 07:11:38.382473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.105 [2024-10-16 07:11:38.382479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.105 [2024-10-16 07:11:38.392441] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:39.105 [2024-10-16 07:11:38.392458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.105 [2024-10-16 07:11:38.392465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.105 [2024-10-16 07:11:38.398461] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:39.105 [2024-10-16 07:11:38.398478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.105 [2024-10-16 07:11:38.398484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.105 [2024-10-16 07:11:38.407807] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:39.105 [2024-10-16 07:11:38.407825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.105 [2024-10-16 07:11:38.407831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.105 [2024-10-16 07:11:38.418062] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:39.105 [2024-10-16 07:11:38.418083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.105 [2024-10-16 07:11:38.418089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.105 [2024-10-16 07:11:38.427543] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1830900) 00:28:39.105 [2024-10-16 07:11:38.427561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.105 [2024-10-16 07:11:38.427568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.105 [2024-10-16 07:11:38.438114] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:39.105 [2024-10-16 07:11:38.438132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.105 [2024-10-16 07:11:38.438138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.105 [2024-10-16 07:11:38.448400] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:39.105 [2024-10-16 07:11:38.448418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.105 [2024-10-16 07:11:38.448424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.105 [2024-10-16 07:11:38.458079] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:39.105 [2024-10-16 07:11:38.458097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.105 [2024-10-16 07:11:38.458103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.105 [2024-10-16 07:11:38.469697] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:39.105 [2024-10-16 07:11:38.469716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.105 [2024-10-16 07:11:38.469722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.105 [2024-10-16 07:11:38.479799] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:39.105 [2024-10-16 07:11:38.479817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.105 [2024-10-16 07:11:38.479824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.105 [2024-10-16 07:11:38.488613] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:39.105 [2024-10-16 07:11:38.488631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.105 [2024-10-16 07:11:38.488637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.105 [2024-10-16 07:11:38.495064] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:39.105 [2024-10-16 07:11:38.495082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.105 [2024-10-16 07:11:38.495089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.105 [2024-10-16 07:11:38.504015] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:39.105 [2024-10-16 07:11:38.504033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.105 [2024-10-16 07:11:38.504039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.105 [2024-10-16 07:11:38.513592] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:39.105 [2024-10-16 07:11:38.513610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.105 [2024-10-16 07:11:38.513617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.105 [2024-10-16 07:11:38.522163] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:39.105 [2024-10-16 07:11:38.522182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.105 [2024-10-16 07:11:38.522188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.105 [2024-10-16 07:11:38.531001] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:39.105 [2024-10-16 07:11:38.531019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.105 [2024-10-16 07:11:38.531026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.105 [2024-10-16 07:11:38.540131] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:39.105 [2024-10-16 07:11:38.540149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.105 [2024-10-16 07:11:38.540155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.105 [2024-10-16 07:11:38.548527] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:39.105 [2024-10-16 07:11:38.548545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.105 [2024-10-16 07:11:38.548552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:28:39.105 [2024-10-16 07:11:38.557123] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1830900) 00:28:39.105 [2024-10-16 07:11:38.557141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.105 [2024-10-16 07:11:38.557147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.105 3008.50 IOPS, 376.06 MiB/s 00:28:39.105 Latency(us) 00:28:39.105 [2024-10-16T05:11:38.604Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:39.105 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:39.105 nvme0n1 : 2.00 3011.94 376.49 0.00 0.00 5309.31 962.56 13981.01 00:28:39.105 [2024-10-16T05:11:38.604Z] =================================================================================================================== 00:28:39.105 [2024-10-16T05:11:38.605Z] Total : 3011.94 376.49 0.00 0.00 5309.31 962.56 13981.01 00:28:39.106 { 00:28:39.106 "results": [ 00:28:39.106 { 00:28:39.106 "job": "nvme0n1", 00:28:39.106 "core_mask": "0x2", 00:28:39.106 "workload": "randread", 00:28:39.106 "status": "finished", 00:28:39.106 "queue_depth": 16, 00:28:39.106 "io_size": 131072, 00:28:39.106 "runtime": 2.003027, 00:28:39.106 "iops": 3011.9414266507642, 00:28:39.106 "mibps": 376.49267833134553, 00:28:39.106 "io_failed": 0, 00:28:39.106 "io_timeout": 0, 00:28:39.106 "avg_latency_us": 5309.30845682082, 00:28:39.106 "min_latency_us": 962.56, 00:28:39.106 "max_latency_us": 13981.013333333334 00:28:39.106 } 00:28:39.106 ], 00:28:39.106 "core_count": 1 00:28:39.106 } 00:28:39.106 07:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:39.106 07:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:39.106 07:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:39.106 | .driver_specific 00:28:39.106 | .nvme_error 00:28:39.106 | .status_code 00:28:39.106 | .command_transient_transport_error' 00:28:39.106 07:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:39.494 07:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 194 > 0 )) 00:28:39.494 07:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3298015 00:28:39.494 07:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3298015 ']' 00:28:39.494 07:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3298015 00:28:39.494 07:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:28:39.494 07:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:39.494 07:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3298015 00:28:39.494 07:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:39.494 07:11:38 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:39.494 07:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3298015' 00:28:39.494 killing process with pid 3298015 00:28:39.494 07:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3298015 00:28:39.494 Received shutdown signal, test time was about 2.000000 seconds 00:28:39.494 00:28:39.494 Latency(us) 00:28:39.494 [2024-10-16T05:11:38.993Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:39.494 [2024-10-16T05:11:38.993Z] =================================================================================================================== 00:28:39.494 [2024-10-16T05:11:38.993Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:39.494 07:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3298015 00:28:39.494 07:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:28:39.494 07:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:39.494 07:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:39.494 07:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:39.494 07:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:39.494 07:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3298697 00:28:39.494 07:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3298697 /var/tmp/bperf.sock 00:28:39.494 07:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3298697 ']' 00:28:39.494 07:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:28:39.494 07:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:39.495 07:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:39.495 07:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:39.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:39.495 07:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:39.495 07:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:39.797 [2024-10-16 07:11:38.988344] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
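The first (randread) run above ends with digest.sh's get_transient_errcount: it pulls bdevperf's iostat over the bperf RPC socket and asserts that the injected digest errors were counted as transient transport errors ((( 194 > 0 )) in this run). A minimal sketch of that check, assuming only the paths, RPC names, and jq filter visible in the trace above (the shell variable handling is illustrative, not the script itself):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Ask bdevperf for per-bdev I/O statistics, then dig out the NVMe error
# counter that data digest failures increment (enabled by --nvme-error-stat).
errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
(( errcount > 0 ))   # the run above counted 194 such errors before killing bperf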
00:28:39.797 [2024-10-16 07:11:38.988400] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3298697 ] 00:28:39.797 [2024-10-16 07:11:39.063768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:39.797 [2024-10-16 07:11:39.092637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:40.373 07:11:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:40.373 07:11:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:28:40.373 07:11:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:40.373 07:11:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:40.634 07:11:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:40.634 07:11:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.634 07:11:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:40.634 07:11:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.634 07:11:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:40.634 07:11:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:41.206 nvme0n1 00:28:41.206 07:11:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:41.206 07:11:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.206 07:11:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:41.206 07:11:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.206 07:11:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:41.206 07:11:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:41.206 Running I/O for 2 seconds... 
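Before the second (randwrite) run drives I/O below, the trace above replays the same setup with data digest enabled: NVMe error statistics on, bdev-level retries off, the controller attached with --ddgst, and the accel layer told to corrupt every 256th crc32c so digest verification fails and the affected commands complete with COMMAND TRANSIENT TRANSPORT ERROR. A condensed sketch of that sequence using the RPCs from the trace, assuming rpc_cmd targets the nvmf target's default RPC socket (only the bperf socket is explicit above):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
bperf() { "$rpc" -s /var/tmp/bperf.sock "$@"; }   # bdevperf-side RPCs

bperf bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # count errors, never retry
"$rpc" accel_error_inject_error -o crc32c -t disable                  # start from a clean state
bperf bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                  # --ddgst: CRC32C data digest on
"$rpc" accel_error_inject_error -o crc32c -t corrupt -i 256           # corrupt every 256th crc32c
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests                              # 2-second randwrite run below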
00:28:41.206 [2024-10-16 07:11:40.547823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166f81e0 00:28:41.206 [2024-10-16 07:11:40.548561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:18591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.206 [2024-10-16 07:11:40.548587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:41.206 [2024-10-16 07:11:40.556473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166f6020 00:28:41.206 [2024-10-16 07:11:40.557195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.206 [2024-10-16 07:11:40.557212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:41.206 [2024-10-16 07:11:40.565188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166f1ca0 00:28:41.206 [2024-10-16 07:11:40.565903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.206 [2024-10-16 07:11:40.565919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:41.206 [2024-10-16 07:11:40.573744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166f6020 00:28:41.206 [2024-10-16 07:11:40.574477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:22896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.206 [2024-10-16 07:11:40.574494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:41.206 [2024-10-16 07:11:40.582262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166f1ca0 00:28:41.206 [2024-10-16 07:11:40.582990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:17550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.206 [2024-10-16 07:11:40.583006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:41.206 [2024-10-16 07:11:40.590887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166f3e60 00:28:41.206 [2024-10-16 07:11:40.591608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.206 [2024-10-16 07:11:40.591624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:41.206 [2024-10-16 07:11:40.599386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166eb760 00:28:41.206 [2024-10-16 07:11:40.600122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:5588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.206 [2024-10-16 07:11:40.600138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 
sqhd:004a p:0 m:0 dnr:0 00:28:41.206 [2024-10-16 07:11:40.607878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166e95a0 00:28:41.206 [2024-10-16 07:11:40.608608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:23083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.206 [2024-10-16 07:11:40.608624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:41.206 [2024-10-16 07:11:40.616358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166efae0 00:28:41.206 [2024-10-16 07:11:40.617103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:10720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.206 [2024-10-16 07:11:40.617119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:41.206 [2024-10-16 07:11:40.624851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166f6020 00:28:41.206 [2024-10-16 07:11:40.625579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:4805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.206 [2024-10-16 07:11:40.625602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:41.206 [2024-10-16 07:11:40.633310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166f81e0 00:28:41.206 [2024-10-16 07:11:40.634006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:1376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.206 [2024-10-16 07:11:40.634023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:41.206 [2024-10-16 07:11:40.641771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166fa3a0 00:28:41.206 [2024-10-16 07:11:40.642463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:25338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.206 [2024-10-16 07:11:40.642479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:41.206 [2024-10-16 07:11:40.650209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166f1ca0 00:28:41.206 [2024-10-16 07:11:40.650913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:19210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.206 [2024-10-16 07:11:40.650929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:41.206 [2024-10-16 07:11:40.658659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166f3e60 00:28:41.207 [2024-10-16 07:11:40.659400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:17881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.207 [2024-10-16 07:11:40.659416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:106 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:41.207 [2024-10-16 07:11:40.667134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166eb760 00:28:41.207 [2024-10-16 07:11:40.667879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:4426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.207 [2024-10-16 07:11:40.667895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:41.207 [2024-10-16 07:11:40.675579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166e95a0 00:28:41.207 [2024-10-16 07:11:40.676306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:17304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.207 [2024-10-16 07:11:40.676322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:41.207 [2024-10-16 07:11:40.684038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166efae0 00:28:41.207 [2024-10-16 07:11:40.684773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:3826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.207 [2024-10-16 07:11:40.684788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:41.207 [2024-10-16 07:11:40.692497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166f6020 00:28:41.207 [2024-10-16 07:11:40.693226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:12242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.207 [2024-10-16 07:11:40.693242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:41.207 [2024-10-16 07:11:40.700950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166f81e0 00:28:41.207 [2024-10-16 07:11:40.701687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.207 [2024-10-16 07:11:40.701703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:41.468 [2024-10-16 07:11:40.709450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166fa3a0 00:28:41.468 [2024-10-16 07:11:40.710162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:17069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.468 [2024-10-16 07:11:40.710178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:41.468 [2024-10-16 07:11:40.717907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166f1ca0 00:28:41.468 [2024-10-16 07:11:40.718632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:21230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.468 [2024-10-16 07:11:40.718648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:41.468 [2024-10-16 07:11:40.726383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166f3e60 00:28:41.468 [2024-10-16 07:11:40.727138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.468 [2024-10-16 07:11:40.727154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:41.468 [2024-10-16 07:11:40.734861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166eb760 00:28:41.468 [2024-10-16 07:11:40.735606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.468 [2024-10-16 07:11:40.735622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:41.468 [2024-10-16 07:11:40.743309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166e95a0 00:28:41.468 [2024-10-16 07:11:40.744008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:18387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.468 [2024-10-16 07:11:40.744024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:41.468 [2024-10-16 07:11:40.751744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166efae0 00:28:41.468 [2024-10-16 07:11:40.752434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:17603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.468 [2024-10-16 07:11:40.752450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:41.468 [2024-10-16 07:11:40.760215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166f6020 00:28:41.468 [2024-10-16 07:11:40.760902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:1512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.468 [2024-10-16 07:11:40.760919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:41.468 [2024-10-16 07:11:40.768650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166f81e0 00:28:41.468 [2024-10-16 07:11:40.769341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:5646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.468 [2024-10-16 07:11:40.769358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:41.468 [2024-10-16 07:11:40.777109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166fa3a0 00:28:41.468 [2024-10-16 07:11:40.777796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:24980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.468 [2024-10-16 07:11:40.777813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:41.468 [2024-10-16 07:11:40.785539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166f1ca0 00:28:41.468 [2024-10-16 07:11:40.786266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:4262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.468 [2024-10-16 07:11:40.786283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:41.468 [2024-10-16 07:11:40.793971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166f3e60 00:28:41.468 [2024-10-16 07:11:40.794695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.468 [2024-10-16 07:11:40.794711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:41.468 [2024-10-16 07:11:40.802425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166eb760 00:28:41.468 [2024-10-16 07:11:40.803137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.468 [2024-10-16 07:11:40.803153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:41.468 [2024-10-16 07:11:40.810877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166e95a0 00:28:41.468 [2024-10-16 07:11:40.811608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:20329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.468 [2024-10-16 07:11:40.811624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:41.468 [2024-10-16 07:11:40.819321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166efae0 00:28:41.468 [2024-10-16 07:11:40.820047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:18238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.468 [2024-10-16 07:11:40.820064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:41.468 [2024-10-16 07:11:40.827765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166f6020 00:28:41.468 [2024-10-16 07:11:40.828511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:3925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.468 [2024-10-16 07:11:40.828527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:41.468 [2024-10-16 07:11:40.836228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166f81e0 00:28:41.468 [2024-10-16 07:11:40.836970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:24169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.468 [2024-10-16 07:11:40.836987] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:41.468 [2024-10-16 07:11:40.844692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166fa3a0 00:28:41.468 [2024-10-16 07:11:40.845424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.468 [2024-10-16 07:11:40.845442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:41.468 [2024-10-16 07:11:40.853149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166f1ca0 00:28:41.468 [2024-10-16 07:11:40.853895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.468 [2024-10-16 07:11:40.853911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:41.468 [2024-10-16 07:11:40.861594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166f3e60 00:28:41.468 [2024-10-16 07:11:40.862247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.468 [2024-10-16 07:11:40.862263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:41.468 [2024-10-16 07:11:40.870057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166eb760 00:28:41.468 [2024-10-16 07:11:40.870645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.468 [2024-10-16 07:11:40.870661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:41.468 [2024-10-16 07:11:40.879068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166eff18 00:28:41.468 [2024-10-16 07:11:40.879888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:23014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.468 [2024-10-16 07:11:40.879904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:41.468 [2024-10-16 07:11:40.888352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166ea248 00:28:41.468 [2024-10-16 07:11:40.889428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:20814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.468 [2024-10-16 07:11:40.889444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:41.468 [2024-10-16 07:11:40.896839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166efae0 00:28:41.468 [2024-10-16 07:11:40.897899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:3427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.469 [2024-10-16 07:11:40.897915] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:41.469 [2024-10-16 07:11:40.904903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166eee38 00:28:41.469 [2024-10-16 07:11:40.905968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.469 [2024-10-16 07:11:40.905983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:41.469 [2024-10-16 07:11:40.914254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166fbcf0 00:28:41.469 [2024-10-16 07:11:40.915460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.469 [2024-10-16 07:11:40.915475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:41.469 [2024-10-16 07:11:40.922724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166e4140 00:28:41.469 [2024-10-16 07:11:40.923902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:12867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.469 [2024-10-16 07:11:40.923917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:41.469 [2024-10-16 07:11:40.929661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166f1430 00:28:41.469 [2024-10-16 07:11:40.930398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:16072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.469 [2024-10-16 07:11:40.930413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:41.469 [2024-10-16 07:11:40.938147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166fe2e8 00:28:41.469 [2024-10-16 07:11:40.938889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.469 [2024-10-16 07:11:40.938905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:41.469 [2024-10-16 07:11:40.946589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166fc998 00:28:41.469 [2024-10-16 07:11:40.947314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:6612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.469 [2024-10-16 07:11:40.947330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:41.469 [2024-10-16 07:11:40.955046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166fb8b8 00:28:41.469 [2024-10-16 07:11:40.955790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:5344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.469 [2024-10-16 07:11:40.955805] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:41.469 [2024-10-16 07:11:40.963520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166e1b48 00:28:41.469 [2024-10-16 07:11:40.964264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.469 [2024-10-16 07:11:40.964279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:41.731 [2024-10-16 07:11:40.972024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166df118 00:28:41.731 [2024-10-16 07:11:40.972759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:10376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.731 [2024-10-16 07:11:40.972774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:41.731 [2024-10-16 07:11:40.980467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166e01f8 00:28:41.731 [2024-10-16 07:11:40.981167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:9468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.731 [2024-10-16 07:11:40.981183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:41.731 [2024-10-16 07:11:40.989039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166fcdd0 00:28:41.731 [2024-10-16 07:11:40.989755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:24062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.731 [2024-10-16 07:11:40.989772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:41.731 [2024-10-16 07:11:40.998624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166fdeb0 00:28:41.731 [2024-10-16 07:11:40.999819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:9734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.731 [2024-10-16 07:11:40.999835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:41.731 [2024-10-16 07:11:41.006932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166e5ec8 00:28:41.731 [2024-10-16 07:11:41.007903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:93 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.731 [2024-10-16 07:11:41.007919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:41.731 [2024-10-16 07:11:41.015427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166e6b70 00:28:41.731 [2024-10-16 07:11:41.016428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:3264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.731 [2024-10-16 
07:11:41.016444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:41.731 [2024-10-16 07:11:41.023921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166dece0 00:28:41.731 [2024-10-16 07:11:41.024971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.731 [2024-10-16 07:11:41.024987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:41.731 [2024-10-16 07:11:41.032402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166f7538 00:28:41.731 [2024-10-16 07:11:41.033428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.731 [2024-10-16 07:11:41.033444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:41.731 [2024-10-16 07:11:41.040889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166e6fa8 00:28:41.731 [2024-10-16 07:11:41.041906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.731 [2024-10-16 07:11:41.041923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:41.731 [2024-10-16 07:11:41.049389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166eb328 00:28:41.731 [2024-10-16 07:11:41.050392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.731 [2024-10-16 07:11:41.050408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:41.731 [2024-10-16 07:11:41.057879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166f2948 00:28:41.731 [2024-10-16 07:11:41.058878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:20286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.731 [2024-10-16 07:11:41.058894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:41.731 [2024-10-16 07:11:41.066340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166f2d80 00:28:41.731 [2024-10-16 07:11:41.067346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.731 [2024-10-16 07:11:41.067365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:41.731 [2024-10-16 07:11:41.074821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166e6b70 00:28:41.731 [2024-10-16 07:11:41.075846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:41.731 [2024-10-16 07:11:41.075862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:41.731 [2024-10-16 07:11:41.083304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166dece0 00:28:41.731 [2024-10-16 07:11:41.084266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.731 [2024-10-16 07:11:41.084283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:41.731 [2024-10-16 07:11:41.091798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166f7538 00:28:41.731 [2024-10-16 07:11:41.092803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.731 [2024-10-16 07:11:41.092819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:41.731 [2024-10-16 07:11:41.100269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166e6fa8 00:28:41.731 [2024-10-16 07:11:41.101276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:12620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.731 [2024-10-16 07:11:41.101293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:41.731 [2024-10-16 07:11:41.108767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166eb328 00:28:41.731 [2024-10-16 07:11:41.109786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.731 [2024-10-16 07:11:41.109802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:41.731 [2024-10-16 07:11:41.117246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166f2948 00:28:41.731 [2024-10-16 07:11:41.118250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:9934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.731 [2024-10-16 07:11:41.118266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:41.731 [2024-10-16 07:11:41.125779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166f2d80 00:28:41.731 [2024-10-16 07:11:41.126750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.731 [2024-10-16 07:11:41.126767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:41.732 [2024-10-16 07:11:41.134260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166e6b70 00:28:41.732 [2024-10-16 07:11:41.135273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:8909 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:28:41.732 [2024-10-16 07:11:41.135289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:41.732 [2024-10-16 07:11:41.142759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166dece0 00:28:41.732 [2024-10-16 07:11:41.143720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:17175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.732 [2024-10-16 07:11:41.143743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:41.732 [2024-10-16 07:11:41.151247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166f7538 00:28:41.732 [2024-10-16 07:11:41.152246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:10438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.732 [2024-10-16 07:11:41.152262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:41.732 [2024-10-16 07:11:41.159714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166e6fa8 00:28:41.732 [2024-10-16 07:11:41.160722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.732 [2024-10-16 07:11:41.160738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:41.732 [2024-10-16 07:11:41.168235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166eb328 00:28:41.732 [2024-10-16 07:11:41.169399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:9944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.732 [2024-10-16 07:11:41.169415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:41.732 [2024-10-16 07:11:41.176904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166f2948 00:28:41.732 [2024-10-16 07:11:41.177927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:13643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.732 [2024-10-16 07:11:41.177943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:41.732 [2024-10-16 07:11:41.185380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166f2d80 00:28:41.732 [2024-10-16 07:11:41.186387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:3820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.732 [2024-10-16 07:11:41.186403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:41.732 [2024-10-16 07:11:41.193873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166e6b70 00:28:41.732 [2024-10-16 07:11:41.194869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:14074 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:28:41.732 [2024-10-16 07:11:41.194886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:41.732 [2024-10-16 07:11:41.202350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166dece0 00:28:41.732 [2024-10-16 07:11:41.203370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:1774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.732 [2024-10-16 07:11:41.203388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:41.732 [2024-10-16 07:11:41.210832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166f7538 00:28:41.732 [2024-10-16 07:11:41.211854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.732 [2024-10-16 07:11:41.211870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:41.732 [2024-10-16 07:11:41.219447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166e6fa8 00:28:41.732 [2024-10-16 07:11:41.220459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:19447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.732 [2024-10-16 07:11:41.220475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:41.732 [2024-10-16 07:11:41.227971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166eb328 00:28:41.732 [2024-10-16 07:11:41.228972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:11694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.732 [2024-10-16 07:11:41.228987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:41.994 [2024-10-16 07:11:41.236461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166f2948 00:28:41.994 [2024-10-16 07:11:41.237470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:17536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.994 [2024-10-16 07:11:41.237485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:41.994 [2024-10-16 07:11:41.244959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166f2d80 00:28:41.994 [2024-10-16 07:11:41.245959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.994 [2024-10-16 07:11:41.245975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:41.994 [2024-10-16 07:11:41.253424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166e6b70 00:28:41.994 [2024-10-16 07:11:41.254426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 
lba:20881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.994 [2024-10-16 07:11:41.254442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:41.994 [2024-10-16 07:11:41.261921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166dece0 00:28:41.994 [2024-10-16 07:11:41.262909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:7501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.994 [2024-10-16 07:11:41.262925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:41.994 [2024-10-16 07:11:41.270387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166f7538 00:28:41.994 [2024-10-16 07:11:41.271409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.994 [2024-10-16 07:11:41.271425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:41.994 [2024-10-16 07:11:41.278888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166e6fa8 00:28:41.994 [2024-10-16 07:11:41.279876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:4877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.994 [2024-10-16 07:11:41.279892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:41.994 [2024-10-16 07:11:41.287368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166eb328 00:28:41.994 [2024-10-16 07:11:41.288379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.994 [2024-10-16 07:11:41.288395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:41.994 [2024-10-16 07:11:41.295852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166f2948 00:28:41.994 [2024-10-16 07:11:41.296869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.994 [2024-10-16 07:11:41.296884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:41.994 [2024-10-16 07:11:41.304337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166f2d80 00:28:41.994 [2024-10-16 07:11:41.305339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:10642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.994 [2024-10-16 07:11:41.305355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:41.994 [2024-10-16 07:11:41.312860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166e6b70 00:28:41.994 [2024-10-16 07:11:41.313860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:59 nsid:1 lba:4615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.994 [2024-10-16 07:11:41.313876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:41.994 [2024-10-16 07:11:41.321348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166dece0 00:28:41.994 [2024-10-16 07:11:41.322376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:23500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.994 [2024-10-16 07:11:41.322391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:41.994 [2024-10-16 07:11:41.329872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166f7538 00:28:41.994 [2024-10-16 07:11:41.330864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:7381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.994 [2024-10-16 07:11:41.330880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:41.994 [2024-10-16 07:11:41.338373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166e6fa8 00:28:41.995 [2024-10-16 07:11:41.339384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:7588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.995 [2024-10-16 07:11:41.339400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:41.995 [2024-10-16 07:11:41.347823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166eb328 00:28:41.995 [2024-10-16 07:11:41.349251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:8652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.995 [2024-10-16 07:11:41.349266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:41.995 [2024-10-16 07:11:41.355433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166f8a50 00:28:41.995 [2024-10-16 07:11:41.356175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.995 [2024-10-16 07:11:41.356191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:41.995 [2024-10-16 07:11:41.363807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166e95a0 00:28:41.995 [2024-10-16 07:11:41.364450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:17304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.995 [2024-10-16 07:11:41.364468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:41.995 [2024-10-16 07:11:41.372276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166fa3a0 00:28:41.995 [2024-10-16 07:11:41.373023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:109 nsid:1 lba:10330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.995 [2024-10-16 07:11:41.373039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:41.995 [2024-10-16 07:11:41.380733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166f8a50 00:28:41.995 [2024-10-16 07:11:41.381447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:12410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.995 [2024-10-16 07:11:41.381463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:41.995 [2024-10-16 07:11:41.389187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166e95a0 00:28:41.995 [2024-10-16 07:11:41.389921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.995 [2024-10-16 07:11:41.389937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:41.995 [2024-10-16 07:11:41.397645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166fa3a0 00:28:41.995 [2024-10-16 07:11:41.398416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:1277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.995 [2024-10-16 07:11:41.398432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:41.995 [2024-10-16 07:11:41.406131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166f8a50 00:28:41.995 [2024-10-16 07:11:41.406869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.995 [2024-10-16 07:11:41.406885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:41.995 [2024-10-16 07:11:41.414597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166e95a0 00:28:41.995 [2024-10-16 07:11:41.415340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.995 [2024-10-16 07:11:41.415356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:41.995 [2024-10-16 07:11:41.423078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166fa3a0 00:28:41.995 [2024-10-16 07:11:41.423832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:3534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.995 [2024-10-16 07:11:41.423852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:41.995 [2024-10-16 07:11:41.431526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166f8a50 00:28:41.995 [2024-10-16 07:11:41.432279] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.995 [2024-10-16 07:11:41.432294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:41.995 [2024-10-16 07:11:41.440594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166f57b0 00:28:41.995 [2024-10-16 07:11:41.441710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.995 [2024-10-16 07:11:41.441725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:41.995 [2024-10-16 07:11:41.449236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166fbcf0 00:28:41.995 [2024-10-16 07:11:41.450341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.995 [2024-10-16 07:11:41.450358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:41.995 [2024-10-16 07:11:41.457732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166ee5c8 00:28:41.995 [2024-10-16 07:11:41.458800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.995 [2024-10-16 07:11:41.458817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:41.995 [2024-10-16 07:11:41.466224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166f6cc8 00:28:41.995 [2024-10-16 07:11:41.467334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.995 [2024-10-16 07:11:41.467351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:41.995 [2024-10-16 07:11:41.474721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166e7c50 00:28:41.995 [2024-10-16 07:11:41.475856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.995 [2024-10-16 07:11:41.475871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:41.995 [2024-10-16 07:11:41.483194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166fe2e8 00:28:41.995 [2024-10-16 07:11:41.484283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:6321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.995 [2024-10-16 07:11:41.484299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:41.995 [2024-10-16 07:11:41.491672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166fa7d8 00:28:41.995 [2024-10-16 07:11:41.492799] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:11520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.995 [2024-10-16 07:11:41.492814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:42.257 [2024-10-16 07:11:41.500172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166f4f40 00:28:42.257 [2024-10-16 07:11:41.501280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:9306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.257 [2024-10-16 07:11:41.501296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:42.257 [2024-10-16 07:11:41.508651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166fbcf0 00:28:42.257 [2024-10-16 07:11:41.509768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.257 [2024-10-16 07:11:41.509783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:42.257 [2024-10-16 07:11:41.517139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166ee5c8 00:28:42.257 [2024-10-16 07:11:41.518220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:25320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.257 [2024-10-16 07:11:41.518236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:42.257 [2024-10-16 07:11:41.525603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166f6cc8 00:28:42.257 [2024-10-16 07:11:41.526721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:8047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.257 [2024-10-16 07:11:41.526737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:42.257 [2024-10-16 07:11:41.534080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166e7c50 00:28:42.257 [2024-10-16 07:11:41.535331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:8156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.257 [2024-10-16 07:11:41.535347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:42.257 29830.00 IOPS, 116.52 MiB/s [2024-10-16T05:11:41.756Z] [2024-10-16 07:11:41.542595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166fc998 00:28:42.257 [2024-10-16 07:11:41.543706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.257 [2024-10-16 07:11:41.543723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:42.257 [2024-10-16 07:11:41.551050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xa08450) with pdu=0x2000166f1ca0 00:28:42.257 [2024-10-16 07:11:41.552214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:11308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.257 [2024-10-16 07:11:41.552230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:42.257 [2024-10-16 07:11:41.559524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166e9e10 00:28:42.258 [2024-10-16 07:11:41.560607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:14225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.258 [2024-10-16 07:11:41.560623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:42.258 [2024-10-16 07:11:41.567977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166fef90 00:28:42.258 [2024-10-16 07:11:41.569061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:6906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.258 [2024-10-16 07:11:41.569077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:42.258 [2024-10-16 07:11:41.576431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166f7970 00:28:42.258 [2024-10-16 07:11:41.577553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.258 [2024-10-16 07:11:41.577568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:42.258 [2024-10-16 07:11:41.584915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166e7c50 00:28:42.258 [2024-10-16 07:11:41.586035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.258 [2024-10-16 07:11:41.586054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:42.258 [2024-10-16 07:11:41.593394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166fc998 00:28:42.258 [2024-10-16 07:11:41.594508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:25202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.258 [2024-10-16 07:11:41.594523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:42.258 [2024-10-16 07:11:41.601890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166f1ca0 00:28:42.258 [2024-10-16 07:11:41.602996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.258 [2024-10-16 07:11:41.603012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:42.258 [2024-10-16 07:11:41.610345] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xa08450) with pdu=0x2000166e9e10 00:28:42.258 [2024-10-16 07:11:41.611463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:6205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.258 [2024-10-16 07:11:41.611478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:42.258 [2024-10-16 07:11:41.618795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166fef90 00:28:42.258 [2024-10-16 07:11:41.619868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:9280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.258 [2024-10-16 07:11:41.619884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:42.258 [2024-10-16 07:11:41.627247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166f7970 00:28:42.258 [2024-10-16 07:11:41.628365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:9740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.258 [2024-10-16 07:11:41.628381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:42.258 [2024-10-16 07:11:41.635712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166e7c50 00:28:42.258 [2024-10-16 07:11:41.636827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:12116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.258 [2024-10-16 07:11:41.636846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:42.258 [2024-10-16 07:11:41.644178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166fc998 00:28:42.258 [2024-10-16 07:11:41.645295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:20351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.258 [2024-10-16 07:11:41.645311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:42.258 [2024-10-16 07:11:41.652645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166f1ca0 00:28:42.258 [2024-10-16 07:11:41.653724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:10970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.258 [2024-10-16 07:11:41.653739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:42.258 [2024-10-16 07:11:41.661116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166e9e10 00:28:42.258 [2024-10-16 07:11:41.662233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:6983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.258 [2024-10-16 07:11:41.662249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:42.258 [2024-10-16 07:11:41.669581] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166fef90 00:28:42.258 [2024-10-16 07:11:41.670711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:1354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.258 [2024-10-16 07:11:41.670728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:42.258 [2024-10-16 07:11:41.678104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166f7970 00:28:42.258 [2024-10-16 07:11:41.679209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:25425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.258 [2024-10-16 07:11:41.679225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:42.258 [2024-10-16 07:11:41.686569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166e7c50 00:28:42.258 [2024-10-16 07:11:41.687693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:16406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.258 [2024-10-16 07:11:41.687709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:42.258 [2024-10-16 07:11:41.695043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166fc998 00:28:42.258 [2024-10-16 07:11:41.696166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.258 [2024-10-16 07:11:41.696182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:42.258 [2024-10-16 07:11:41.703519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166f1ca0 00:28:42.258 [2024-10-16 07:11:41.704649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:21587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.258 [2024-10-16 07:11:41.704664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:42.258 [2024-10-16 07:11:41.711977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166e9e10 00:28:42.258 [2024-10-16 07:11:41.713055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:9798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.258 [2024-10-16 07:11:41.713071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:42.258 [2024-10-16 07:11:41.720444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166fef90 00:28:42.258 [2024-10-16 07:11:41.721525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.258 [2024-10-16 07:11:41.721541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:42.258 [2024-10-16 
07:11:41.728927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166f7970 00:28:42.258 [2024-10-16 07:11:41.730001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:1273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.258 [2024-10-16 07:11:41.730017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:42.258 [2024-10-16 07:11:41.737399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166e7c50 00:28:42.258 [2024-10-16 07:11:41.738510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.258 [2024-10-16 07:11:41.738526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:42.258 [2024-10-16 07:11:41.745309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166e1b48 00:28:42.258 [2024-10-16 07:11:41.746329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.258 [2024-10-16 07:11:41.746344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.258 [2024-10-16 07:11:41.754684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166ec840 00:28:42.258 [2024-10-16 07:11:41.755915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.258 [2024-10-16 07:11:41.755930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.520 [2024-10-16 07:11:41.761703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166ed4e8 00:28:42.520 [2024-10-16 07:11:41.762470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:5105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.520 [2024-10-16 07:11:41.762486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:42.520 [2024-10-16 07:11:41.770078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166ef6a8 00:28:42.520 [2024-10-16 07:11:41.770842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.520 [2024-10-16 07:11:41.770860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:42.520 [2024-10-16 07:11:41.778522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166e27f0 00:28:42.520 [2024-10-16 07:11:41.779292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:9099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.520 [2024-10-16 07:11:41.779307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 
00:28:42.520 [2024-10-16 07:11:41.786962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166f2948 00:28:42.520 [2024-10-16 07:11:41.787724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:21935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.520 [2024-10-16 07:11:41.787740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:42.520 [2024-10-16 07:11:41.795404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166fe2e8 00:28:42.520 [2024-10-16 07:11:41.796155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.520 [2024-10-16 07:11:41.796170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:42.520 [2024-10-16 07:11:41.803847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166f0350 00:28:42.520 [2024-10-16 07:11:41.804569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.520 [2024-10-16 07:11:41.804587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:42.521 [2024-10-16 07:11:41.812272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166e1b48 00:28:42.521 [2024-10-16 07:11:41.813010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.521 [2024-10-16 07:11:41.813025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:42.521 [2024-10-16 07:11:41.820738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166fb8b8 00:28:42.521 [2024-10-16 07:11:41.821490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:19132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.521 [2024-10-16 07:11:41.821505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:42.521 [2024-10-16 07:11:41.829183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166dfdc0 00:28:42.521 [2024-10-16 07:11:41.829949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:1908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.521 [2024-10-16 07:11:41.829964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:42.521 [2024-10-16 07:11:41.837619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166eaef0 00:28:42.521 [2024-10-16 07:11:41.838330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.521 [2024-10-16 07:11:41.838346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0051 
p:0 m:0 dnr:0 00:28:42.521 [2024-10-16 07:11:41.846066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166e6300 00:28:42.521 [2024-10-16 07:11:41.846815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:18072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.521 [2024-10-16 07:11:41.846830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:42.521 [2024-10-16 07:11:41.854489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166e3498 00:28:42.521 [2024-10-16 07:11:41.855247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:2366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.521 [2024-10-16 07:11:41.855263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:42.521 [2024-10-16 07:11:41.862942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166e3060 00:28:42.521 [2024-10-16 07:11:41.863685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.521 [2024-10-16 07:11:41.863701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:42.521 [2024-10-16 07:11:41.871383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166f6890 00:28:42.521 [2024-10-16 07:11:41.872154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:22784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.521 [2024-10-16 07:11:41.872170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:42.521 [2024-10-16 07:11:41.879821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166e4de8 00:28:42.521 [2024-10-16 07:11:41.880584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:13709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.521 [2024-10-16 07:11:41.880599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:42.521 [2024-10-16 07:11:41.888265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166e0ea0 00:28:42.521 [2024-10-16 07:11:41.888989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.521 [2024-10-16 07:11:41.889005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:42.521 [2024-10-16 07:11:41.896697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166efae0 00:28:42.521 [2024-10-16 07:11:41.897323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.521 [2024-10-16 07:11:41.897338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:108 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:42.521 [2024-10-16 07:11:41.905393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166ed920 00:28:42.521 [2024-10-16 07:11:41.906257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:9583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.521 [2024-10-16 07:11:41.906272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:42.521 [2024-10-16 07:11:41.913999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166eea00 00:28:42.521 [2024-10-16 07:11:41.914833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:18875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.521 [2024-10-16 07:11:41.914853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:42.521 [2024-10-16 07:11:41.922443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166f92c0 00:28:42.521 [2024-10-16 07:11:41.923314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:4297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.521 [2024-10-16 07:11:41.923331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:42.521 [2024-10-16 07:11:41.930881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166ec840 00:28:42.521 [2024-10-16 07:11:41.931763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:9902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.521 [2024-10-16 07:11:41.931778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:42.521 [2024-10-16 07:11:41.939304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166f35f0 00:28:42.521 [2024-10-16 07:11:41.940193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.521 [2024-10-16 07:11:41.940209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:42.521 [2024-10-16 07:11:41.947721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166e6738 00:28:42.521 [2024-10-16 07:11:41.948589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.521 [2024-10-16 07:11:41.948605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:42.521 [2024-10-16 07:11:41.956210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166f6458 00:28:42.521 [2024-10-16 07:11:41.957067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:22909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.521 [2024-10-16 07:11:41.957084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:65 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:28:42.521 [2024-10-16 07:11:41.964693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166e01f8
00:28:42.521 [2024-10-16 07:11:41.965586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:21075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:42.521 [2024-10-16 07:11:41.965602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:005f p:0 m:0 dnr:0
[... the same three-line sequence repeats for roughly sixty further 4 KiB WRITE commands on tqpair=(0xa08450), one per PDU offset: a data digest error from tcp.c:2233:data_crc32_calc_done followed by a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion ...]
00:28:43.047 30017.00 IOPS, 117.25 MiB/s [2024-10-16T05:11:42.546Z] [2024-10-16 07:11:42.539969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08450) with pdu=0x2000166f4f40
00:28:43.047 [2024-10-16 07:11:42.540723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:43.047 [2024-10-16 07:11:42.540742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:28:43.047
00:28:43.047 Latency(us)
00:28:43.047 [2024-10-16T05:11:42.546Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:43.047 Job: nvme0n1 (Core Mask 0x2,
workload: randwrite, depth: 128, IO size: 4096) 00:28:43.047 nvme0n1 : 2.00 30035.65 117.33 0.00 0.00 4255.67 2102.61 13817.17 00:28:43.047 [2024-10-16T05:11:42.546Z] =================================================================================================================== 00:28:43.047 [2024-10-16T05:11:42.546Z] Total : 30035.65 117.33 0.00 0.00 4255.67 2102.61 13817.17 00:28:43.047 { 00:28:43.047 "results": [ 00:28:43.047 { 00:28:43.047 "job": "nvme0n1", 00:28:43.047 "core_mask": "0x2", 00:28:43.047 "workload": "randwrite", 00:28:43.047 "status": "finished", 00:28:43.047 "queue_depth": 128, 00:28:43.047 "io_size": 4096, 00:28:43.047 "runtime": 2.004618, 00:28:43.047 "iops": 30035.64768948498, 00:28:43.047 "mibps": 117.3267487870507, 00:28:43.047 "io_failed": 0, 00:28:43.047 "io_timeout": 0, 00:28:43.047 "avg_latency_us": 4255.671322814593, 00:28:43.047 "min_latency_us": 2102.6133333333332, 00:28:43.047 "max_latency_us": 13817.173333333334 00:28:43.047 } 00:28:43.047 ], 00:28:43.047 "core_count": 1 00:28:43.047 } 00:28:43.308 07:11:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:43.308 07:11:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:43.308 07:11:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:43.308 07:11:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:43.308 | .driver_specific 00:28:43.308 | .nvme_error 00:28:43.308 | .status_code 00:28:43.308 | .command_transient_transport_error' 00:28:43.308 07:11:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 236 > 0 )) 00:28:43.308 07:11:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3298697 00:28:43.308 07:11:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3298697 ']' 00:28:43.308 07:11:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3298697 00:28:43.308 07:11:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:28:43.308 07:11:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:43.308 07:11:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3298697 00:28:43.569 07:11:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:43.569 07:11:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:43.569 07:11:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3298697' 00:28:43.569 killing process with pid 3298697 00:28:43.569 07:11:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3298697 00:28:43.569 Received shutdown signal, test time was about 2.000000 seconds 00:28:43.569 00:28:43.569 Latency(us) 00:28:43.569 [2024-10-16T05:11:43.068Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:43.569 [2024-10-16T05:11:43.068Z] 
=================================================================================================================== 00:28:43.569 [2024-10-16T05:11:43.068Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:43.569 07:11:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3298697 00:28:43.569 07:11:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:28:43.569 07:11:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:43.569 07:11:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:43.569 07:11:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:43.569 07:11:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:28:43.569 07:11:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3299498 00:28:43.569 07:11:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3299498 /var/tmp/bperf.sock 00:28:43.569 07:11:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3299498 ']' 00:28:43.569 07:11:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:28:43.569 07:11:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:43.569 07:11:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:43.569 07:11:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:43.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:43.569 07:11:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:43.569 07:11:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:43.569 [2024-10-16 07:11:42.970021] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:28:43.569 [2024-10-16 07:11:42.970093] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3299498 ] 00:28:43.569 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:43.569 Zero copy mechanism will not be used. 
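For orientation: the get_transient_errcount trace above (host/digest.sh@18-28) is just one bdev_get_iostat RPC piped through jq. A minimal standalone sketch of that check, reusing the socket path, bdev name, and jq filter exactly as traced, and assuming the bdevperf app is still listening on /var/tmp/bperf.sock:

    #!/usr/bin/env bash
    # Hedged sketch of get_transient_errcount as traced at host/digest.sh@18-28.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # bdev_get_iostat exposes per-status-code NVMe error counters here because
    # bdev_nvme_set_options was given --nvme-error-stat before the controller attach.
    count=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    # The harness asserts the count is positive, as in the (( 236 > 0 )) check above.
    (( count > 0 )) && echo "nvme0n1: $count transient transport errors"

A positive count is the pass condition for this test: the injected digest failures show up as TRANSIENT TRANSPORT ERROR completions in the statistics (236 above) while io_failed stays 0 in the job results, since the failed writes are retried.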
00:28:43.569 [2024-10-16 07:11:43.046378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:43.830 [2024-10-16 07:11:43.076368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:44.401 07:11:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:44.401 07:11:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:28:44.401 07:11:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:44.401 07:11:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:44.401 07:11:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:44.401 07:11:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.401 07:11:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:44.661 07:11:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.661 07:11:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:44.661 07:11:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:44.661 nvme0n1 00:28:44.661 07:11:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:44.661 07:11:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.661 07:11:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:44.923 07:11:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.923 07:11:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:44.923 07:11:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:44.923 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:44.923 Zero copy mechanism will not be used. 00:28:44.923 Running I/O for 2 seconds... 
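The RPC sequence just traced (host/digest.sh@61-67) is what arms the failure mode for the run below: error statistics and retries are switched on, the controller is attached with TCP data digest enabled, and the accel layer's crc32c is then set to corrupt. A hedged replay of those calls with the same flags as traced; the socket targeted by rpc_cmd is an assumption (the target app's default RPC socket), while the bperf_rpc calls address /var/tmp/bperf.sock as shown in the trace:

    #!/usr/bin/env bash
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Tally NVMe errors per status code and retry failed I/O (-1 = unlimited),
    # so injected digest errors are counted instead of failing the workload.
    "$rpc" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Attach the TCP controller with data digest (--ddgst) enabled.
    "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Inject crc32c corruption in the accel layer (-o crc32c -t corrupt -i 32,
    # arguments exactly as traced above; default socket assumed here).
    "$rpc" accel_error_inject_error -o crc32c -t corrupt -i 32

With the corruption armed, each 131072-byte write in the 2-second run below surfaces as a tcp.c:2233 data digest error followed by a TRANSIENT TRANSPORT ERROR completion on tqpair (0xa08790).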
00:28:44.923 [2024-10-16 07:11:44.266610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90
00:28:44.923 [2024-10-16 07:11:44.266981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.923 [2024-10-16 07:11:44.267008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same sequence repeats for several dozen further WRITE commands (len:32) on tqpair=(0xa08790), pdu=0x2000166fef90: each data digest error from tcp.c:2233:data_crc32_calc_done is followed by a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion ...]
00:28:45.187 [2024-10-16 07:11:44.483634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90
00:28:45.187 [2024-10-16 07:11:44.483677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21696
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.187 [2024-10-16 07:11:44.483695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.187 [2024-10-16 07:11:44.487542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:45.187 [2024-10-16 07:11:44.487585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.187 [2024-10-16 07:11:44.487600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.187 [2024-10-16 07:11:44.491418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:45.187 [2024-10-16 07:11:44.491470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.187 [2024-10-16 07:11:44.491486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.187 [2024-10-16 07:11:44.495217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:45.187 [2024-10-16 07:11:44.495267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.187 [2024-10-16 07:11:44.495282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.187 [2024-10-16 07:11:44.499940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:45.187 [2024-10-16 07:11:44.499990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.187 [2024-10-16 07:11:44.500005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.187 [2024-10-16 07:11:44.503362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:45.187 [2024-10-16 07:11:44.503409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.188 [2024-10-16 07:11:44.503425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.188 [2024-10-16 07:11:44.507051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:45.188 [2024-10-16 07:11:44.507105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.188 [2024-10-16 07:11:44.507120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.188 [2024-10-16 07:11:44.512600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:45.188 [2024-10-16 07:11:44.512666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.188 [2024-10-16 07:11:44.512682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.188 [2024-10-16 07:11:44.518395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:45.188 [2024-10-16 07:11:44.518467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.188 [2024-10-16 07:11:44.518482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.188 [2024-10-16 07:11:44.525607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:45.188 [2024-10-16 07:11:44.525650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.188 [2024-10-16 07:11:44.525668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.188 [2024-10-16 07:11:44.531728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:45.188 [2024-10-16 07:11:44.532029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.188 [2024-10-16 07:11:44.532045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.188 [2024-10-16 07:11:44.536793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:45.188 [2024-10-16 07:11:44.536871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.188 [2024-10-16 07:11:44.536887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.188 [2024-10-16 07:11:44.540749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:45.188 [2024-10-16 07:11:44.540796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.188 [2024-10-16 07:11:44.540811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.188 [2024-10-16 07:11:44.544705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:45.188 [2024-10-16 07:11:44.544749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.188 [2024-10-16 07:11:44.544765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.188 [2024-10-16 07:11:44.552222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:45.188 [2024-10-16 07:11:44.552294] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.188 [2024-10-16 07:11:44.552310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.188 [2024-10-16 07:11:44.557011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:45.188 [2024-10-16 07:11:44.557075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.188 [2024-10-16 07:11:44.557090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.188 [2024-10-16 07:11:44.560867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:45.188 [2024-10-16 07:11:44.560923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.188 [2024-10-16 07:11:44.560939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.188 [2024-10-16 07:11:44.564712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:45.188 [2024-10-16 07:11:44.564769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.188 [2024-10-16 07:11:44.564784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.188 [2024-10-16 07:11:44.568548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:45.188 [2024-10-16 07:11:44.568606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.188 [2024-10-16 07:11:44.568621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.188 [2024-10-16 07:11:44.572595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:45.188 [2024-10-16 07:11:44.572641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.188 [2024-10-16 07:11:44.572656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.188 [2024-10-16 07:11:44.576325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:45.188 [2024-10-16 07:11:44.576422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.188 [2024-10-16 07:11:44.576437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.188 [2024-10-16 07:11:44.582560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:45.188 [2024-10-16 07:11:44.582605] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.188 [2024-10-16 07:11:44.582621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.188 [2024-10-16 07:11:44.586147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:45.188 [2024-10-16 07:11:44.586207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.188 [2024-10-16 07:11:44.586222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.188 [2024-10-16 07:11:44.591523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:45.188 [2024-10-16 07:11:44.591573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.188 [2024-10-16 07:11:44.591589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.188 [2024-10-16 07:11:44.596758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:45.188 [2024-10-16 07:11:44.596867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.188 [2024-10-16 07:11:44.596883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.188 [2024-10-16 07:11:44.603048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:45.188 [2024-10-16 07:11:44.603114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.188 [2024-10-16 07:11:44.603129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.188 [2024-10-16 07:11:44.608028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:45.188 [2024-10-16 07:11:44.608216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.188 [2024-10-16 07:11:44.608232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.188 [2024-10-16 07:11:44.613826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:45.188 [2024-10-16 07:11:44.613875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.188 [2024-10-16 07:11:44.613891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.188 [2024-10-16 07:11:44.619739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:45.188 [2024-10-16 
07:11:44.619817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.188 [2024-10-16 07:11:44.619832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.188 [2024-10-16 07:11:44.623705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:45.188 [2024-10-16 07:11:44.623764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.188 [2024-10-16 07:11:44.623779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.188 [2024-10-16 07:11:44.628000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:45.188 [2024-10-16 07:11:44.628074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.188 [2024-10-16 07:11:44.628089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.188 [2024-10-16 07:11:44.634218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:45.188 [2024-10-16 07:11:44.634285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.188 [2024-10-16 07:11:44.634301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.188 [2024-10-16 07:11:44.639601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:45.188 [2024-10-16 07:11:44.639661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.188 [2024-10-16 07:11:44.639676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.188 [2024-10-16 07:11:44.647444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:45.188 [2024-10-16 07:11:44.647511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.188 [2024-10-16 07:11:44.647526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.188 [2024-10-16 07:11:44.656697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:45.189 [2024-10-16 07:11:44.656762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.189 [2024-10-16 07:11:44.656778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.189 [2024-10-16 07:11:44.661225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 
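Each triplet above is one pass through the TCP transport's digest-verification callback (data_crc32_calc_done): the CRC32C the transport computed over a PDU's data section disagrees with the data digest (DDGST) the PDU carries, the WRITE command that owned the data is printed, and the command is completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22). The steady cadence and the fixed tqpair/pdu values suggest deliberately injected digest errors rather than spontaneous corruption. Below is a minimal sketch of the check itself, assuming a plain bitwise CRC32C (Castagnoli, reflected form, polynomial 0x82F63B78); SPDK's real path uses its own crc32c helpers, and the payload and received digest here are hypothetical stand-ins:

/* crc32c_check.c - illustrative NVMe/TCP-style data digest verification. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Bitwise CRC32C (Castagnoli), reflected form, polynomial 0x82F63B78. */
static uint32_t crc32c(const void *buf, size_t len)
{
	const uint8_t *p = buf;
	uint32_t crc = 0xFFFFFFFFu;

	while (len--) {
		crc ^= *p++;
		for (int k = 0; k < 8; k++)
			crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1u));
	}
	return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
	/* "123456789" is the standard CRC32C test vector (expected 0xE3069283). */
	const char payload[] = "123456789";
	uint32_t received_ddgst = 0xDEADBEEFu;	/* hypothetical corrupted digest */
	uint32_t computed = crc32c(payload, sizeof(payload) - 1);

	if (computed != received_ddgst) {
		/* The mismatch condition that data_crc32_calc_done flags above. */
		printf("data digest error: computed 0x%08X, received 0x%08X\n",
		       computed, received_ddgst);
		return 1;
	}
	return 0;
}

Built with a plain cc crc32c_check.c, the sketch computes 0xE3069283 for the test vector, mismatches the bogus digest, and reports the same kind of data digest error seen in the records before and after this note.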
[... the same data digest error triplets continue uninterrupted (cid:0, varying lba) through 07:11:45.148 ...]
00:28:45.717 [2024-10-16 07:11:45.157939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90
00:28:45.717 [2024-10-16 07:11:45.158041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.717 [2024-10-16 07:11:45.158056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:45.717 [2024-10-16 07:11:45.166713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90
00:28:45.717 [2024-10-16 07:11:45.166772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA
BLOCK TRANSPORT 0x0 00:28:45.717 [2024-10-16 07:11:45.166787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.717 [2024-10-16 07:11:45.176074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:45.717 [2024-10-16 07:11:45.176159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.717 [2024-10-16 07:11:45.176175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.717 [2024-10-16 07:11:45.182680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:45.717 [2024-10-16 07:11:45.182740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.717 [2024-10-16 07:11:45.182755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.717 [2024-10-16 07:11:45.192009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:45.718 [2024-10-16 07:11:45.192270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.718 [2024-10-16 07:11:45.192285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.718 [2024-10-16 07:11:45.198585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:45.718 [2024-10-16 07:11:45.198635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.718 [2024-10-16 07:11:45.198650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.718 [2024-10-16 07:11:45.202330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:45.718 [2024-10-16 07:11:45.202378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.718 [2024-10-16 07:11:45.202394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.718 [2024-10-16 07:11:45.206278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:45.718 [2024-10-16 07:11:45.206322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.718 [2024-10-16 07:11:45.206337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.718 [2024-10-16 07:11:45.210205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:45.718 [2024-10-16 07:11:45.210252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.718 [2024-10-16 07:11:45.210267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.718 [2024-10-16 07:11:45.214044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:45.718 [2024-10-16 07:11:45.214088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.718 [2024-10-16 07:11:45.214104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.979 [2024-10-16 07:11:45.218192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:45.979 [2024-10-16 07:11:45.218236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.979 [2024-10-16 07:11:45.218251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.979 [2024-10-16 07:11:45.226706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:45.979 [2024-10-16 07:11:45.226964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.979 [2024-10-16 07:11:45.226979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.979 [2024-10-16 07:11:45.233965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:45.979 [2024-10-16 07:11:45.234038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.979 [2024-10-16 07:11:45.234053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.979 [2024-10-16 07:11:45.241489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:45.979 [2024-10-16 07:11:45.241545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.979 [2024-10-16 07:11:45.241560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.979 [2024-10-16 07:11:45.250466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:45.979 [2024-10-16 07:11:45.250515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.979 [2024-10-16 07:11:45.250530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.979 5588.00 IOPS, 698.50 MiB/s [2024-10-16T05:11:45.478Z] [2024-10-16 07:11:45.257291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:45.979 [2024-10-16 07:11:45.257336] nvme_qpair.c: 
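Every failure above is one event repeated: the target-side TCP transport recomputes the CRC32C data digest (DDGST) over the payload of an incoming DATA PDU (tcp.c:data_crc32_calc_done), finds it does not match the digest carried in the PDU, and the host's qpair layer then prints the WRITE that came back with a retryable transport status. That every single PDU fails is consistent with deliberate digest-error injection by the test rather than a flaky link. The sketch below is illustrative only, not SPDK source: it shows the standard CRC32C (Castagnoli) computation that the NVMe/TCP data digest uses, and how one corrupted payload bit produces exactly this mismatch. The function name, buffer size, and fault injection are made up for the example; SPDK's real helper for the same computation is spdk_crc32c_update() in include/spdk/crc32.h.

/* Illustrative sketch only -- not SPDK source. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise, dependency-free reflected CRC32C (polynomial 0x82F63B78). */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;               /* standard initial value */

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++) {
            crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1u));
        }
    }
    return crc ^ 0xFFFFFFFFu;                 /* standard final XOR */
}

int main(void)
{
    uint8_t payload[512];                     /* stand-in for a DATA PDU payload */

    memset(payload, 0xA5, sizeof(payload));

    uint32_t ddgst_sent = crc32c(payload, sizeof(payload));

    payload[100] ^= 0x01;                     /* one bit flipped in transit */

    uint32_t ddgst_recv = crc32c(payload, sizeof(payload));
    if (ddgst_recv != ddgst_sent) {
        fprintf(stderr, "Data digest error: computed 0x%08x, PDU carried 0x%08x\n",
                (unsigned)ddgst_recv, (unsigned)ddgst_sent);
    }
    return 0;
}

Production code uses a table-driven or hardware-accelerated CRC32C rather than this bit-at-a-time loop, but the digest value, and therefore the mismatch, is the same.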
00:28:45.979 [2024-10-16 07:11:45.257291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90
00:28:45.979 [2024-10-16 07:11:45.257336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.979 [2024-10-16 07:11:45.257357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... several dozen more WRITE commands fail the data digest check on tqpair=(0xa08790) and complete with TRANSIENT TRANSPORT ERROR (00/22) ...]
00:28:46.243 [2024-10-16 07:11:45.700732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90
00:28:46.243 [2024-10-16 07:11:45.701035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:46.243 [2024-10-16 07:11:45.701058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
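For reference, the "(00/22)" printed in each completion is the (SCT/SC) pair from the NVMe completion queue entry: status code type 0x0 (generic command status) and status code 0x22 (Transient Transport Error), with dnr:0 marking the command as retryable. A minimal decode sketch follows, assuming the 16-bit status-plus-phase field layout of the NVMe base specification; the variable names are illustrative, not an SPDK API.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /*
     * Example status field: SCT 0x0 (generic command status),
     * SC 0x22 (Transient Transport Error), as printed above.
     */
    uint16_t sts = (uint16_t)((0x0u << 9) | (0x22u << 1));

    unsigned p   = sts & 0x1u;            /* phase tag */
    unsigned sc  = (sts >> 1) & 0xffu;    /* status code */
    unsigned sct = (sts >> 9) & 0x7u;     /* status code type */
    unsigned m   = (sts >> 14) & 0x1u;    /* more status info in log page */
    unsigned dnr = (sts >> 15) & 0x1u;    /* do not retry */

    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
    /* prints: (00/22) p:0 m:0 dnr:0 -- i.e. a retryable transport error */
    return 0;
}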
[... the data digest error / TRANSIENT TRANSPORT ERROR (00/22) pattern continues on tqpair=(0xa08790) ...]
00:28:46.509 [2024-10-16 07:11:45.998328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90
00:28:46.509 [2024-10-16 07:11:45.998399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA
BLOCK TRANSPORT 0x0 00:28:46.509 [2024-10-16 07:11:45.998414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.509 [2024-10-16 07:11:46.002034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:46.509 [2024-10-16 07:11:46.002078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.509 [2024-10-16 07:11:46.002093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.509 [2024-10-16 07:11:46.005881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:46.509 [2024-10-16 07:11:46.005926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.509 [2024-10-16 07:11:46.005941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.772 [2024-10-16 07:11:46.012310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:46.772 [2024-10-16 07:11:46.012357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.772 [2024-10-16 07:11:46.012372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.772 [2024-10-16 07:11:46.015784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:46.772 [2024-10-16 07:11:46.015830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.772 [2024-10-16 07:11:46.015850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.772 [2024-10-16 07:11:46.022592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:46.772 [2024-10-16 07:11:46.022874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.772 [2024-10-16 07:11:46.022889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.772 [2024-10-16 07:11:46.031666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:46.772 [2024-10-16 07:11:46.031897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.772 [2024-10-16 07:11:46.031912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.772 [2024-10-16 07:11:46.040447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:46.772 [2024-10-16 07:11:46.040557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.772 [2024-10-16 07:11:46.040573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.772 [2024-10-16 07:11:46.048624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:46.772 [2024-10-16 07:11:46.048879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.772 [2024-10-16 07:11:46.048894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.772 [2024-10-16 07:11:46.054379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:46.772 [2024-10-16 07:11:46.054474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.772 [2024-10-16 07:11:46.054492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.772 [2024-10-16 07:11:46.059206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:46.772 [2024-10-16 07:11:46.059257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.772 [2024-10-16 07:11:46.059272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.772 [2024-10-16 07:11:46.063211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:46.772 [2024-10-16 07:11:46.063268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.772 [2024-10-16 07:11:46.063284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.772 [2024-10-16 07:11:46.067599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:46.772 [2024-10-16 07:11:46.067646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.772 [2024-10-16 07:11:46.067661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.772 [2024-10-16 07:11:46.074264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:46.772 [2024-10-16 07:11:46.074310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.772 [2024-10-16 07:11:46.074326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.772 [2024-10-16 07:11:46.079475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:46.772 [2024-10-16 07:11:46.079529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.772 [2024-10-16 07:11:46.079545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.772 [2024-10-16 07:11:46.085998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:46.772 [2024-10-16 07:11:46.086048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.772 [2024-10-16 07:11:46.086063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.772 [2024-10-16 07:11:46.091886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:46.772 [2024-10-16 07:11:46.091956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.772 [2024-10-16 07:11:46.091971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.772 [2024-10-16 07:11:46.098189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:46.772 [2024-10-16 07:11:46.098256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.773 [2024-10-16 07:11:46.098272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.773 [2024-10-16 07:11:46.105129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:46.773 [2024-10-16 07:11:46.105372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.773 [2024-10-16 07:11:46.105387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.773 [2024-10-16 07:11:46.115169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:46.773 [2024-10-16 07:11:46.115464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.773 [2024-10-16 07:11:46.115481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.773 [2024-10-16 07:11:46.126082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:46.773 [2024-10-16 07:11:46.126316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.773 [2024-10-16 07:11:46.126331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.773 [2024-10-16 07:11:46.136532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:46.773 [2024-10-16 07:11:46.136707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.773 [2024-10-16 07:11:46.136722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.773 [2024-10-16 07:11:46.147476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:46.773 [2024-10-16 07:11:46.147716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.773 [2024-10-16 07:11:46.147731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.773 [2024-10-16 07:11:46.157726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:46.773 [2024-10-16 07:11:46.158004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.773 [2024-10-16 07:11:46.158027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.773 [2024-10-16 07:11:46.168422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:46.773 [2024-10-16 07:11:46.168665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.773 [2024-10-16 07:11:46.168681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.773 [2024-10-16 07:11:46.179256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:46.773 [2024-10-16 07:11:46.179491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.773 [2024-10-16 07:11:46.179507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.773 [2024-10-16 07:11:46.189056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:46.773 [2024-10-16 07:11:46.189357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.773 [2024-10-16 07:11:46.189374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.773 [2024-10-16 07:11:46.199805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:46.773 [2024-10-16 07:11:46.200014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.773 [2024-10-16 07:11:46.200029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.773 [2024-10-16 07:11:46.209986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:46.773 [2024-10-16 07:11:46.210196] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.773 [2024-10-16 07:11:46.210212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.773 [2024-10-16 07:11:46.221177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:46.773 [2024-10-16 07:11:46.221421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.773 [2024-10-16 07:11:46.221437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.773 [2024-10-16 07:11:46.231313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:46.773 [2024-10-16 07:11:46.231624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.773 [2024-10-16 07:11:46.231640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.773 [2024-10-16 07:11:46.236835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:46.773 [2024-10-16 07:11:46.236886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.773 [2024-10-16 07:11:46.236901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.773 [2024-10-16 07:11:46.240493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:46.773 [2024-10-16 07:11:46.240543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.773 [2024-10-16 07:11:46.240559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.773 [2024-10-16 07:11:46.244084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:46.773 [2024-10-16 07:11:46.244131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.773 [2024-10-16 07:11:46.244146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.773 [2024-10-16 07:11:46.249116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:46.773 [2024-10-16 07:11:46.249198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.773 [2024-10-16 07:11:46.249212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.773 [2024-10-16 07:11:46.252744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa08790) with pdu=0x2000166fef90 00:28:46.773 [2024-10-16 
07:11:46.252810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.773 [2024-10-16 07:11:46.252828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.773 4965.00 IOPS, 620.62 MiB/s 00:28:46.773 Latency(us) 00:28:46.773 [2024-10-16T05:11:46.272Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:46.773 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:46.773 nvme0n1 : 2.00 4962.57 620.32 0.00 0.00 3218.74 1378.99 12615.68 00:28:46.773 [2024-10-16T05:11:46.272Z] =================================================================================================================== 00:28:46.773 [2024-10-16T05:11:46.272Z] Total : 4962.57 620.32 0.00 0.00 3218.74 1378.99 12615.68 00:28:46.773 { 00:28:46.773 "results": [ 00:28:46.773 { 00:28:46.773 "job": "nvme0n1", 00:28:46.773 "core_mask": "0x2", 00:28:46.773 "workload": "randwrite", 00:28:46.773 "status": "finished", 00:28:46.773 "queue_depth": 16, 00:28:46.773 "io_size": 131072, 00:28:46.773 "runtime": 2.004202, 00:28:46.773 "iops": 4962.573632797493, 00:28:46.773 "mibps": 620.3217040996866, 00:28:46.773 "io_failed": 0, 00:28:46.773 "io_timeout": 0, 00:28:46.773 "avg_latency_us": 3218.742103358134, 00:28:46.773 "min_latency_us": 1378.9866666666667, 00:28:46.773 "max_latency_us": 12615.68 00:28:46.773 } 00:28:46.773 ], 00:28:46.773 "core_count": 1 00:28:46.773 } 00:28:47.035 07:11:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:47.035 07:11:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:47.035 07:11:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:47.035 | .driver_specific 00:28:47.035 | .nvme_error 00:28:47.035 | .status_code 00:28:47.035 | .command_transient_transport_error' 00:28:47.035 07:11:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:47.035 07:11:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 320 > 0 )) 00:28:47.035 07:11:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3299498 00:28:47.035 07:11:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3299498 ']' 00:28:47.035 07:11:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3299498 00:28:47.035 07:11:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:28:47.035 07:11:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:47.035 07:11:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3299498 00:28:47.297 07:11:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:47.297 07:11:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:47.297 07:11:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # 
echo 'killing process with pid 3299498' 00:28:47.297 killing process with pid 3299498 00:28:47.297 07:11:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3299498 00:28:47.297 Received shutdown signal, test time was about 2.000000 seconds 00:28:47.297 00:28:47.297 Latency(us) 00:28:47.297 [2024-10-16T05:11:46.796Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:47.297 [2024-10-16T05:11:46.796Z] =================================================================================================================== 00:28:47.297 [2024-10-16T05:11:46.796Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:47.297 07:11:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3299498 00:28:47.297 07:11:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3296982 00:28:47.297 07:11:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3296982 ']' 00:28:47.297 07:11:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3296982 00:28:47.297 07:11:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:28:47.297 07:11:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:47.297 07:11:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3296982 00:28:47.297 07:11:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:47.297 07:11:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:47.297 07:11:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3296982' 00:28:47.297 killing process with pid 3296982 00:28:47.297 07:11:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3296982 00:28:47.297 07:11:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3296982 00:28:47.558 00:28:47.558 real 0m16.671s 00:28:47.558 user 0m33.222s 00:28:47.558 sys 0m3.487s 00:28:47.558 07:11:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:47.559 07:11:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:47.559 ************************************ 00:28:47.559 END TEST nvmf_digest_error 00:28:47.559 ************************************ 00:28:47.559 07:11:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:28:47.559 07:11:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:28:47.559 07:11:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:47.559 07:11:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:28:47.559 07:11:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:47.559 07:11:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:28:47.559 07:11:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:47.559 07:11:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:47.559 rmmod nvme_tcp 00:28:47.559 rmmod 
nvme_fabrics 00:28:47.559 rmmod nvme_keyring 00:28:47.559 07:11:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:47.559 07:11:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:28:47.559 07:11:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:28:47.559 07:11:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@515 -- # '[' -n 3296982 ']' 00:28:47.559 07:11:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # killprocess 3296982 00:28:47.559 07:11:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 3296982 ']' 00:28:47.559 07:11:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 3296982 00:28:47.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3296982) - No such process 00:28:47.559 07:11:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 3296982 is not found' 00:28:47.559 Process with pid 3296982 is not found 00:28:47.559 07:11:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:47.559 07:11:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:47.559 07:11:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:47.559 07:11:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:28:47.559 07:11:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-save 00:28:47.559 07:11:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:47.559 07:11:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-restore 00:28:47.559 07:11:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:47.559 07:11:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:47.559 07:11:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:47.559 07:11:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:47.559 07:11:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:50.103 07:11:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:50.103 00:28:50.103 real 0m42.850s 00:28:50.103 user 1m7.196s 00:28:50.103 sys 0m12.936s 00:28:50.103 07:11:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:50.103 07:11:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:50.103 ************************************ 00:28:50.103 END TEST nvmf_digest 00:28:50.103 ************************************ 00:28:50.103 07:11:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:28:50.103 07:11:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:28:50.103 07:11:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:28:50.103 07:11:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:50.103 07:11:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:50.103 07:11:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:50.103 07:11:49 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:50.103 ************************************ 00:28:50.103 START TEST nvmf_bdevperf 00:28:50.103 ************************************ 00:28:50.103 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:50.103 * Looking for test storage... 00:28:50.103 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:50.103 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:50.103 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:28:50.103 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:50.103 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:50.103 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:50.103 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:50.103 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:50.103 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:28:50.103 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:28:50.103 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:28:50.103 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:28:50.103 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:28:50.103 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:28:50.103 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:28:50.103 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:50.103 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:28:50.103 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:28:50.104 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:50.104 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:50.104 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:28:50.104 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:28:50.104 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:50.104 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:28:50.104 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:28:50.104 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:28:50.104 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:28:50.104 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:50.104 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:28:50.104 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:28:50.104 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:50.104 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:50.104 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:28:50.104 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:50.104 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:50.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:50.104 --rc genhtml_branch_coverage=1 00:28:50.104 --rc genhtml_function_coverage=1 00:28:50.104 --rc genhtml_legend=1 00:28:50.104 --rc geninfo_all_blocks=1 00:28:50.104 --rc geninfo_unexecuted_blocks=1 00:28:50.104 00:28:50.104 ' 00:28:50.104 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:50.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:50.104 --rc genhtml_branch_coverage=1 00:28:50.104 --rc genhtml_function_coverage=1 00:28:50.104 --rc genhtml_legend=1 00:28:50.104 --rc geninfo_all_blocks=1 00:28:50.104 --rc geninfo_unexecuted_blocks=1 00:28:50.104 00:28:50.104 ' 00:28:50.104 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:50.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:50.104 --rc genhtml_branch_coverage=1 00:28:50.104 --rc genhtml_function_coverage=1 00:28:50.104 --rc genhtml_legend=1 00:28:50.104 --rc geninfo_all_blocks=1 00:28:50.104 --rc geninfo_unexecuted_blocks=1 00:28:50.104 00:28:50.104 ' 00:28:50.104 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:50.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:50.104 --rc genhtml_branch_coverage=1 00:28:50.104 --rc genhtml_function_coverage=1 00:28:50.104 --rc genhtml_legend=1 00:28:50.104 --rc geninfo_all_blocks=1 00:28:50.104 --rc geninfo_unexecuted_blocks=1 00:28:50.104 00:28:50.104 ' 00:28:50.104 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:50.104 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:28:50.104 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:50.104 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:50.104 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:50.104 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:50.104 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:50.104 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:50.104 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:50.104 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:50.104 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:50.104 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:50.104 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:50.104 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:50.104 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:50.104 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:50.104 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:50.104 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:50.104 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:50.104 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:28:50.104 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:50.104 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:50.104 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:50.104 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.104 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.104 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.104 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:28:50.104 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.104 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:28:50.104 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:50.104 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:50.104 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:50.104 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:50.104 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:50.104 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:50.104 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:50.105 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:50.105 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:50.105 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:50.105 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:50.105 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:50.105 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:50.105 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:50.105 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:50.105 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:50.105 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:50.105 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:50.105 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:50.105 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:50.105 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:50.105 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:50.105 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:50.105 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:28:50.105 07:11:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:58.248 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:58.248 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:28:58.248 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:58.248 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:58.248 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:58.248 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:58.248 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:58.248 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:28:58.248 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:58.248 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:28:58.248 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:28:58.248 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:28:58.248 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:28:58.248 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:28:58.248 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:28:58.248 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:58.248 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:58.248 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:58.248 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:58.248 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:58.248 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:58.248 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:58.248 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:58.249 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:58.249 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
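At this point the trace is inside gather_supported_nvmf_pci_devs: both E810 ports (0x8086 - 0x159b) have matched the device-ID table built just above, and the lines that follow resolve each PCI function to its kernel interface through sysfs. A minimal, hypothetical sketch of that discovery idiom (simplified from the common.sh trace, with the ID list trimmed to the two E810 device IDs it registers):

#!/usr/bin/env bash
# Hypothetical sketch: resolve supported Intel E810 PCI functions to their
# kernel net devices, the same sysfs walk the trace performs around here.
intel=0x8086
e810=(0x1592 0x159b)   # device IDs taken from the e810 table in the trace

for pci in /sys/bus/pci/devices/*; do
    [[ $(<"$pci/vendor") == "$intel" ]] || continue
    dev=$(<"$pci/device")
    for id in "${e810[@]}"; do
        [[ $dev == "$id" ]] || continue
        # A port bound to a netdev driver lists its interface names under net/
        [[ -d $pci/net ]] || continue
        for net_dev in "$pci"/net/*; do
            echo "Found net devices under ${pci##*/}: ${net_dev##*/}"
        done
    done
done

Run against the machine in this log, the sketch would report cvl_0_0 and cvl_0_1 for the two ports at 0000:4b:00.0 and 0000:4b:00.1, matching the "Found net devices under" lines below.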
00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:58.249 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:58.249 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # is_hw=yes 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:58.249 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:58.249 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.530 ms 00:28:58.249 00:28:58.249 --- 10.0.0.2 ping statistics --- 00:28:58.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:58.249 rtt min/avg/max/mdev = 0.530/0.530/0.530/0.000 ms 00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:58.249 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # return 0
00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init
00:28:58.249 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:28:58.250 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:28:58.250 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable
00:28:58.250 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:58.250 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=3304411
00:28:58.250 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 3304411
00:28:58.250 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:28:58.250 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 3304411 ']'
00:28:58.250 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:58.250 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100
00:28:58.250 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:58.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:58.250 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable
00:28:58.250 07:11:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:58.250 [2024-10-16 07:11:56.885658] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization...
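nvmfappstart then launches the target inside that namespace and blocks until its RPC socket appears. A simplified equivalent of what the trace shows (binary path and flags taken from the trace; the polling loop is an illustrative stand-in for the suite's waitforlisten helper, not its actual implementation):

    #!/usr/bin/env bash
    NS=cvl_0_0_ns_spdk
    SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin

    # Run nvmf_tgt on cores 1-3 (-m 0xE) with all tracepoint groups on (-e 0xFFFF)
    ip netns exec "$NS" "$SPDK_BIN/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!

    # Wait until the app listens on its UNIX-domain RPC socket
    for _ in $(seq 100); do
        [[ -S /var/tmp/spdk.sock ]] && break
        sleep 0.1
    done
    echo "nvmf_tgt up with pid $nvmfpid"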
00:28:58.250 [2024-10-16 07:11:56.885725] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:58.250 [2024-10-16 07:11:56.975438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:28:58.250 [2024-10-16 07:11:57.027363] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:58.250 [2024-10-16 07:11:57.027411] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:58.250 [2024-10-16 07:11:57.027420] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:58.250 [2024-10-16 07:11:57.027427] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:58.250 [2024-10-16 07:11:57.027434] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:58.250 [2024-10-16 07:11:57.029537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:28:58.250 [2024-10-16 07:11:57.029698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:58.250 [2024-10-16 07:11:57.029700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:28:58.250 07:11:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:28:58.250 07:11:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0
00:28:58.250 07:11:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:28:58.250 07:11:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable
00:28:58.250 07:11:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:58.250 07:11:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:58.250 07:11:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:28:58.250 07:11:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:58.250 07:11:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:58.511 [2024-10-16 07:11:57.748775] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:58.511 07:11:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:58.511 07:11:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:28:58.511 07:11:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:58.511 07:11:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:58.511 Malloc0
00:28:58.511 07:11:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:58.511 07:11:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:28:58.511 07:11:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:58.511 07:11:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:58.511 07:11:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
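The three reactors are exactly what the -m 0xE core mask asks for; a quick check of that bit arithmetic (a hypothetical helper, not part of the suite):

    # -m 0xE is binary 1110: bit 0 clear, bits 1-3 set, so the app gets
    # cores 1, 2 and 3 -- matching the three "Reactor started" notices.
    mask=0xE
    for core in {0..3}; do
        (( (mask >> core) & 1 )) && echo "core $core is in the mask"
    done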
00:28:58.511 07:11:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:28:58.511 07:11:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:58.511 07:11:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:58.511 07:11:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:58.511 07:11:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:58.511 07:11:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:58.511 07:11:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:58.511 [2024-10-16 07:11:57.823356] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:58.511 07:11:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:58.511 07:11:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1
00:28:58.511 07:11:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json
00:28:58.511 07:11:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=()
00:28:58.511 07:11:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config
00:28:58.511 07:11:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}"
00:28:58.511 07:11:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF
00:28:58.511 {
00:28:58.511 "params": {
00:28:58.511 "name": "Nvme$subsystem",
00:28:58.511 "trtype": "$TEST_TRANSPORT",
00:28:58.511 "traddr": "$NVMF_FIRST_TARGET_IP",
00:28:58.511 "adrfam": "ipv4",
00:28:58.511 "trsvcid": "$NVMF_PORT",
00:28:58.511 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:28:58.511 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:28:58.511 "hdgst": ${hdgst:-false},
00:28:58.511 "ddgst": ${ddgst:-false}
00:28:58.511 },
00:28:58.511 "method": "bdev_nvme_attach_controller"
00:28:58.511 }
00:28:58.511 EOF
00:28:58.511 )")
00:28:58.511 07:11:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat
00:28:58.511 07:11:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq .
00:28:58.511 07:11:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=,
00:28:58.511 07:11:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{
00:28:58.511 "params": {
00:28:58.511 "name": "Nvme1",
00:28:58.511 "trtype": "tcp",
00:28:58.511 "traddr": "10.0.0.2",
00:28:58.511 "adrfam": "ipv4",
00:28:58.511 "trsvcid": "4420",
00:28:58.511 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:28:58.511 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:28:58.511 "hdgst": false,
00:28:58.511 "ddgst": false
00:28:58.511 },
00:28:58.511 "method": "bdev_nvme_attach_controller"
00:28:58.511 }'
00:28:58.511 [2024-10-16 07:11:57.881632] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization...
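rpc_cmd is a thin wrapper over SPDK's scripts/rpc.py, so the whole target bring-up captured above can be replayed by hand. A sketch under that assumption (RPC method names and arguments exactly as traced; the rpc.py path is illustrative for this workspace):

    #!/usr/bin/env bash
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    $RPC nvmf_create_transport -t tcp -o -u 8192      # TCP transport, opts as traced
    $RPC bdev_malloc_create 64 512 -b Malloc0         # 64 MiB RAM bdev, 512 B blocks
    $RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001   # -a: allow any host
    $RPC nvmf_subsystem_add_ns "$NQN" Malloc0         # expose the bdev as a namespace
    $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

Each call lines up with one "rpc_cmd ..." line and one *NOTICE* in the log: the transport init, the Malloc0 echo, and finally the listener on 10.0.0.2:4420.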
00:28:58.511 [2024-10-16 07:11:57.881697] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3304716 ]
00:28:58.511 [2024-10-16 07:11:57.961551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:58.772 [2024-10-16 07:11:58.014433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:28:59.034 Running I/O for 1 seconds...
00:28:59.976 8480.00 IOPS, 33.12 MiB/s
00:28:59.976 
00:28:59.976                                                             Latency(us)
00:28:59.976 [2024-10-16T05:11:59.475Z] Device Information : runtime(s)     IOPS    MiB/s   Fail/s    TO/s    Average      min       max
00:28:59.976 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:59.976 Verification LBA range: start 0x0 length 0x4000
00:28:59.976 Nvme1n1            :        1.01   8537.58    33.35     0.00     0.00   14909.65   928.43   14527.15
00:28:59.976 [2024-10-16T05:11:59.475Z] ===================================================================================================================
00:28:59.976 [2024-10-16T05:11:59.475Z] Total              :               8537.58    33.35     0.00     0.00   14909.65   928.43   14527.15
00:29:00.238 07:11:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3304982
00:29:00.238 07:11:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:29:00.238 07:11:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:29:00.238 07:11:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:29:00.238 07:11:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=()
00:29:00.238 07:11:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config
00:29:00.238 07:11:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}"
00:29:00.238 07:11:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF
00:29:00.238 {
00:29:00.238 "params": {
00:29:00.238 "name": "Nvme$subsystem",
00:29:00.238 "trtype": "$TEST_TRANSPORT",
00:29:00.238 "traddr": "$NVMF_FIRST_TARGET_IP",
00:29:00.238 "adrfam": "ipv4",
00:29:00.238 "trsvcid": "$NVMF_PORT",
00:29:00.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:29:00.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:29:00.238 "hdgst": ${hdgst:-false},
00:29:00.238 "ddgst": ${ddgst:-false}
00:29:00.238 },
00:29:00.238 "method": "bdev_nvme_attach_controller"
00:29:00.238 }
00:29:00.238 EOF
00:29:00.238 )")
00:29:00.238 07:11:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat
00:29:00.238 07:11:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq .
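gen_nvmf_target_json turns those per-subsystem params into the JSON that bdevperf reads from the --json file descriptor. Written out by hand, and assuming the standard SPDK "subsystems" envelope around the traced bdev_nvme_attach_controller stanza, an equivalent invocation would be:

    #!/usr/bin/env bash
    # Hand-rolled stand-in for: gen_nvmf_target_json | bdevperf --json /dev/fd/62 ...
    # (envelope assumed; params and flags copied from the trace above)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        --json <(cat <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    ) -q 128 -o 4096 -w verify -t 1   # queue depth 128, 4 KiB IOs, verify for 1 s

The one-second verify pass above produced the 8537.58 IOPS / 33.35 MiB/s row in the table; the second run it is about to start uses the same config but -t 15 and -f.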
00:29:00.238 07:11:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=,
00:29:00.238 07:11:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{
00:29:00.238 "params": {
00:29:00.238 "name": "Nvme1",
00:29:00.238 "trtype": "tcp",
00:29:00.238 "traddr": "10.0.0.2",
00:29:00.238 "adrfam": "ipv4",
00:29:00.238 "trsvcid": "4420",
00:29:00.238 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:29:00.238 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:29:00.238 "hdgst": false,
00:29:00.238 "ddgst": false
00:29:00.238 },
00:29:00.238 "method": "bdev_nvme_attach_controller"
00:29:00.238 }'
00:29:00.238 [2024-10-16 07:11:59.550719] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization...
00:29:00.238 [2024-10-16 07:11:59.550778] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3304982 ]
00:29:00.238 [2024-10-16 07:11:59.626851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:00.238 [2024-10-16 07:11:59.662527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:00.498 Running I/O for 15 seconds...
00:29:02.380 10328.00 IOPS, 40.34 MiB/s
[2024-10-16T05:12:02.824Z] 10766.00 IOPS, 42.05 MiB/s
[2024-10-16T05:12:02.824Z] 07:12:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3304411
00:29:03.325 07:12:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:29:03.325 [2024-10-16 07:12:02.514035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:104016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.325 [2024-10-16 07:12:02.514077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:03.325 [... roughly 120 further nvme_qpair print_command/print_completion pairs elided: WRITE (and a few trailing READ) commands on sqid:1, lba 104024 through 105032, len:8, each reported back as 'ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0' while the target was down ...]
00:29:03.328 [2024-10-16 07:12:02.516399] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fd2e0 is same with the state(6) to be set
00:29:03.328 [2024-10-16 07:12:02.516409] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:03.328 [2024-10-16 07:12:02.516416] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:03.328 [2024-10-16 07:12:02.516423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104072 len:8 PRP1 0x0 PRP2 0x0
00:29:03.328 [2024-10-16 07:12:02.516431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:03.328 [2024-10-16 07:12:02.516470] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20fd2e0 was disconnected and freed. reset controller.
00:29:03.328 [2024-10-16 07:12:02.520010] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:03.328 [2024-10-16 07:12:02.520058] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:03.328 [2024-10-16 07:12:02.520784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.328 [2024-10-16 07:12:02.520801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:03.328 [2024-10-16 07:12:02.520810] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:03.328 [2024-10-16 07:12:02.521036] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:03.328 [2024-10-16 07:12:02.521261] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:03.328 [2024-10-16 07:12:02.521270] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:03.328 [2024-10-16 07:12:02.521279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:03.328 [2024-10-16 07:12:02.524828] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:03.328 [2024-10-16 07:12:02.534226] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.328 [2024-10-16 07:12:02.534796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.328 [2024-10-16 07:12:02.534815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:03.328 [2024-10-16 07:12:02.534823] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:03.328 [2024-10-16 07:12:02.535051] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:03.328 [2024-10-16 07:12:02.535270] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.328 [2024-10-16 07:12:02.535279] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.328 [2024-10-16 07:12:02.535286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.328 [2024-10-16 07:12:02.538826] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.328 [2024-10-16 07:12:02.548059] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.328 [2024-10-16 07:12:02.548600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.328 [2024-10-16 07:12:02.548617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:03.328 [2024-10-16 07:12:02.548625] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:03.328 [2024-10-16 07:12:02.548851] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:03.328 [2024-10-16 07:12:02.549072] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.328 [2024-10-16 07:12:02.549079] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.328 [2024-10-16 07:12:02.549091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.328 [2024-10-16 07:12:02.552748] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.328 [2024-10-16 07:12:02.561970] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.328 [2024-10-16 07:12:02.562512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.328 [2024-10-16 07:12:02.562529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:03.328 [2024-10-16 07:12:02.562537] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:03.328 [2024-10-16 07:12:02.562756] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:03.328 [2024-10-16 07:12:02.562983] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.328 [2024-10-16 07:12:02.562993] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.328 [2024-10-16 07:12:02.563000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.328 [2024-10-16 07:12:02.566542] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.328 [2024-10-16 07:12:02.575956] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.328 [2024-10-16 07:12:02.576527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.328 [2024-10-16 07:12:02.576544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:03.328 [2024-10-16 07:12:02.576552] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:03.328 [2024-10-16 07:12:02.576771] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:03.328 [2024-10-16 07:12:02.576995] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.328 [2024-10-16 07:12:02.577005] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.328 [2024-10-16 07:12:02.577012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.328 [2024-10-16 07:12:02.580560] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.328 [2024-10-16 07:12:02.589763] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.328 [2024-10-16 07:12:02.590311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.328 [2024-10-16 07:12:02.590328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:03.328 [2024-10-16 07:12:02.590336] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:03.328 [2024-10-16 07:12:02.590556] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:03.328 [2024-10-16 07:12:02.590775] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.328 [2024-10-16 07:12:02.590784] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.328 [2024-10-16 07:12:02.590792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.328 [2024-10-16 07:12:02.594339] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.328 [2024-10-16 07:12:02.603602] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.328 [2024-10-16 07:12:02.604151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.328 [2024-10-16 07:12:02.604168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:03.328 [2024-10-16 07:12:02.604176] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:03.328 [2024-10-16 07:12:02.604395] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:03.328 [2024-10-16 07:12:02.604615] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.328 [2024-10-16 07:12:02.604624] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.329 [2024-10-16 07:12:02.604631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.329 [2024-10-16 07:12:02.608198] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.329 [2024-10-16 07:12:02.617608] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.329 [2024-10-16 07:12:02.618146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.329 [2024-10-16 07:12:02.618163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:03.329 [2024-10-16 07:12:02.618171] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:03.329 [2024-10-16 07:12:02.618391] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:03.329 [2024-10-16 07:12:02.618609] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.329 [2024-10-16 07:12:02.618618] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.329 [2024-10-16 07:12:02.618625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.329 [2024-10-16 07:12:02.622178] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.329 [2024-10-16 07:12:02.631618] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.329 [2024-10-16 07:12:02.632187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.329 [2024-10-16 07:12:02.632206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:03.329 [2024-10-16 07:12:02.632214] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:03.329 [2024-10-16 07:12:02.632433] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:03.329 [2024-10-16 07:12:02.632652] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.329 [2024-10-16 07:12:02.632661] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.329 [2024-10-16 07:12:02.632668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.329 [2024-10-16 07:12:02.636222] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.329 [2024-10-16 07:12:02.645430] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.329 [2024-10-16 07:12:02.645971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.329 [2024-10-16 07:12:02.645990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:03.329 [2024-10-16 07:12:02.645998] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:03.329 [2024-10-16 07:12:02.646217] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:03.329 [2024-10-16 07:12:02.646442] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.329 [2024-10-16 07:12:02.646451] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.329 [2024-10-16 07:12:02.646459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.329 [2024-10-16 07:12:02.650017] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.329 [2024-10-16 07:12:02.659245] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.329 [2024-10-16 07:12:02.659915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.329 [2024-10-16 07:12:02.659964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:03.329 [2024-10-16 07:12:02.659976] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:03.329 [2024-10-16 07:12:02.660223] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:03.329 [2024-10-16 07:12:02.660447] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.329 [2024-10-16 07:12:02.660456] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.329 [2024-10-16 07:12:02.660463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.329 [2024-10-16 07:12:02.664033] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.329 [2024-10-16 07:12:02.673247] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.329 [2024-10-16 07:12:02.673860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.329 [2024-10-16 07:12:02.673885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:03.329 [2024-10-16 07:12:02.673894] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:03.329 [2024-10-16 07:12:02.674115] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:03.329 [2024-10-16 07:12:02.674335] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.329 [2024-10-16 07:12:02.674345] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.329 [2024-10-16 07:12:02.674352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.329 [2024-10-16 07:12:02.677907] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.329 [2024-10-16 07:12:02.687104] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.329 [2024-10-16 07:12:02.687648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.329 [2024-10-16 07:12:02.687667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:03.329 [2024-10-16 07:12:02.687675] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:03.329 [2024-10-16 07:12:02.687902] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:03.329 [2024-10-16 07:12:02.688123] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.329 [2024-10-16 07:12:02.688132] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.329 [2024-10-16 07:12:02.688140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.329 [2024-10-16 07:12:02.691699] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.329 [2024-10-16 07:12:02.700907] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.329 [2024-10-16 07:12:02.701470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.329 [2024-10-16 07:12:02.701490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:03.329 [2024-10-16 07:12:02.701498] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:03.329 [2024-10-16 07:12:02.701718] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:03.329 [2024-10-16 07:12:02.701946] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.329 [2024-10-16 07:12:02.701956] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.329 [2024-10-16 07:12:02.701963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.329 [2024-10-16 07:12:02.705513] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.329 [2024-10-16 07:12:02.714723] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.329 [2024-10-16 07:12:02.715285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.329 [2024-10-16 07:12:02.715304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:03.329 [2024-10-16 07:12:02.715313] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:03.329 [2024-10-16 07:12:02.715533] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:03.329 [2024-10-16 07:12:02.715753] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.329 [2024-10-16 07:12:02.715762] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.329 [2024-10-16 07:12:02.715769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.329 [2024-10-16 07:12:02.719327] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.329 [2024-10-16 07:12:02.728541] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.329 [2024-10-16 07:12:02.729107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.329 [2024-10-16 07:12:02.729130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:03.329 [2024-10-16 07:12:02.729138] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:03.329 [2024-10-16 07:12:02.729359] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:03.329 [2024-10-16 07:12:02.729579] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.329 [2024-10-16 07:12:02.729590] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.329 [2024-10-16 07:12:02.729598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.330 [2024-10-16 07:12:02.733156] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.330 [2024-10-16 07:12:02.742363] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.330 [2024-10-16 07:12:02.742923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.330 [2024-10-16 07:12:02.742943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:03.330 [2024-10-16 07:12:02.742957] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:03.330 [2024-10-16 07:12:02.743178] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:03.330 [2024-10-16 07:12:02.743399] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.330 [2024-10-16 07:12:02.743409] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.330 [2024-10-16 07:12:02.743416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.330 [2024-10-16 07:12:02.746973] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.330 [2024-10-16 07:12:02.756247] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.330 [2024-10-16 07:12:02.756837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.330 [2024-10-16 07:12:02.756870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:03.330 [2024-10-16 07:12:02.756878] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:03.330 [2024-10-16 07:12:02.757100] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:03.330 [2024-10-16 07:12:02.757321] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.330 [2024-10-16 07:12:02.757331] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.330 [2024-10-16 07:12:02.757338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.330 [2024-10-16 07:12:02.760904] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.330 [2024-10-16 07:12:02.770128] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.330 [2024-10-16 07:12:02.770731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.330 [2024-10-16 07:12:02.770755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:03.330 [2024-10-16 07:12:02.770765] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:03.330 [2024-10-16 07:12:02.770995] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:03.330 [2024-10-16 07:12:02.771218] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.330 [2024-10-16 07:12:02.771227] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.330 [2024-10-16 07:12:02.771237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.330 [2024-10-16 07:12:02.774791] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.330 [2024-10-16 07:12:02.784011] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.330 [2024-10-16 07:12:02.784713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.330 [2024-10-16 07:12:02.784777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:03.330 [2024-10-16 07:12:02.784793] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:03.330 [2024-10-16 07:12:02.785061] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:03.330 [2024-10-16 07:12:02.785298] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.330 [2024-10-16 07:12:02.785308] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.330 [2024-10-16 07:12:02.785316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.330 [2024-10-16 07:12:02.788898] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.330 [2024-10-16 07:12:02.797921] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.330 [2024-10-16 07:12:02.798504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.330 [2024-10-16 07:12:02.798532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:03.330 [2024-10-16 07:12:02.798541] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:03.330 [2024-10-16 07:12:02.798764] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:03.330 [2024-10-16 07:12:02.798996] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.330 [2024-10-16 07:12:02.799007] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.330 [2024-10-16 07:12:02.799016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.330 [2024-10-16 07:12:02.802576] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.330 [2024-10-16 07:12:02.811786] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.330 [2024-10-16 07:12:02.812447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.330 [2024-10-16 07:12:02.812509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:03.330 [2024-10-16 07:12:02.812523] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:03.330 [2024-10-16 07:12:02.812778] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:03.330 [2024-10-16 07:12:02.813016] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.330 [2024-10-16 07:12:02.813026] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.330 [2024-10-16 07:12:02.813034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.330 [2024-10-16 07:12:02.816608] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.593 9795.33 IOPS, 38.26 MiB/s [2024-10-16T05:12:03.092Z] [2024-10-16 07:12:02.825626] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.593 [2024-10-16 07:12:02.826279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.593 [2024-10-16 07:12:02.826308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:03.593 [2024-10-16 07:12:02.826317] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:03.593 [2024-10-16 07:12:02.826541] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:03.593 [2024-10-16 07:12:02.826763] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.593 [2024-10-16 07:12:02.826772] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.593 [2024-10-16 07:12:02.826780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.593 [2024-10-16 07:12:02.830357] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.593 [2024-10-16 07:12:02.839566] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.593 [2024-10-16 07:12:02.840253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.593 [2024-10-16 07:12:02.840317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:03.593 [2024-10-16 07:12:02.840330] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:03.593 [2024-10-16 07:12:02.840585] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:03.593 [2024-10-16 07:12:02.840813] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.593 [2024-10-16 07:12:02.840822] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.593 [2024-10-16 07:12:02.840831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.593 [2024-10-16 07:12:02.844413] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.593 [2024-10-16 07:12:02.853416] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.593 [2024-10-16 07:12:02.854194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.593 [2024-10-16 07:12:02.854257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:03.593 [2024-10-16 07:12:02.854271] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:03.593 [2024-10-16 07:12:02.854526] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:03.593 [2024-10-16 07:12:02.854754] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.593 [2024-10-16 07:12:02.854764] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.593 [2024-10-16 07:12:02.854772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.593 [2024-10-16 07:12:02.858368] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.593 [2024-10-16 07:12:02.867380] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.593 [2024-10-16 07:12:02.867894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.593 [2024-10-16 07:12:02.867927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:03.593 [2024-10-16 07:12:02.867938] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:03.593 [2024-10-16 07:12:02.868166] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:03.593 [2024-10-16 07:12:02.868389] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.593 [2024-10-16 07:12:02.868399] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.593 [2024-10-16 07:12:02.868407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.593 [2024-10-16 07:12:02.871977] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.593 [2024-10-16 07:12:02.881181] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.593 [2024-10-16 07:12:02.881857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.593 [2024-10-16 07:12:02.881921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:03.593 [2024-10-16 07:12:02.881944] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:03.593 [2024-10-16 07:12:02.882201] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:03.593 [2024-10-16 07:12:02.882428] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.593 [2024-10-16 07:12:02.882438] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.593 [2024-10-16 07:12:02.882446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.593 [2024-10-16 07:12:02.886026] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.593 [2024-10-16 07:12:02.895046] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.593 [2024-10-16 07:12:02.895757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.593 [2024-10-16 07:12:02.895822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:03.593 [2024-10-16 07:12:02.895836] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:03.593 [2024-10-16 07:12:02.896103] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:03.593 [2024-10-16 07:12:02.896331] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.593 [2024-10-16 07:12:02.896341] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.593 [2024-10-16 07:12:02.896349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.593 [2024-10-16 07:12:02.899912] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.593 [2024-10-16 07:12:02.908916] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.593 [2024-10-16 07:12:02.909504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.593 [2024-10-16 07:12:02.909531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:03.594 [2024-10-16 07:12:02.909540] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:03.594 [2024-10-16 07:12:02.909764] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:03.594 [2024-10-16 07:12:02.909995] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.594 [2024-10-16 07:12:02.910006] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.594 [2024-10-16 07:12:02.910014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.594 [2024-10-16 07:12:02.913568] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.594 [2024-10-16 07:12:02.922774] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.594 [2024-10-16 07:12:02.923429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.594 [2024-10-16 07:12:02.923493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:03.594 [2024-10-16 07:12:02.923506] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:03.594 [2024-10-16 07:12:02.923762] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:03.594 [2024-10-16 07:12:02.924002] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.594 [2024-10-16 07:12:02.924020] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.594 [2024-10-16 07:12:02.924029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.594 [2024-10-16 07:12:02.927609] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.594 [2024-10-16 07:12:02.936630] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.594 [2024-10-16 07:12:02.937363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.594 [2024-10-16 07:12:02.937425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:03.594 [2024-10-16 07:12:02.937438] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:03.594 [2024-10-16 07:12:02.937693] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:03.594 [2024-10-16 07:12:02.937935] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.594 [2024-10-16 07:12:02.937945] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.594 [2024-10-16 07:12:02.937954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.594 [2024-10-16 07:12:02.941524] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.594 [2024-10-16 07:12:02.950528] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.594 [2024-10-16 07:12:02.951219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.594 [2024-10-16 07:12:02.951282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:03.594 [2024-10-16 07:12:02.951296] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:03.594 [2024-10-16 07:12:02.951551] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:03.594 [2024-10-16 07:12:02.951777] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.594 [2024-10-16 07:12:02.951787] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.594 [2024-10-16 07:12:02.951796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.594 [2024-10-16 07:12:02.955396] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.594 [2024-10-16 07:12:02.964485] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.594 [2024-10-16 07:12:02.965145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.594 [2024-10-16 07:12:02.965208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:03.594 [2024-10-16 07:12:02.965221] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:03.594 [2024-10-16 07:12:02.965476] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:03.594 [2024-10-16 07:12:02.965703] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.594 [2024-10-16 07:12:02.965712] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.594 [2024-10-16 07:12:02.965720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.594 [2024-10-16 07:12:02.969306] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.594 [2024-10-16 07:12:02.978350] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.594 [2024-10-16 07:12:02.978894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.594 [2024-10-16 07:12:02.978925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:03.594 [2024-10-16 07:12:02.978934] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:03.594 [2024-10-16 07:12:02.979159] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:03.594 [2024-10-16 07:12:02.979381] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.594 [2024-10-16 07:12:02.979391] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.594 [2024-10-16 07:12:02.979398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.594 [2024-10-16 07:12:02.982977] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.594 [2024-10-16 07:12:02.992215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.594 [2024-10-16 07:12:02.992931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.594 [2024-10-16 07:12:02.992995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:03.594 [2024-10-16 07:12:02.993008] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:03.594 [2024-10-16 07:12:02.993264] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:03.594 [2024-10-16 07:12:02.993491] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.594 [2024-10-16 07:12:02.993501] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.594 [2024-10-16 07:12:02.993509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.594 [2024-10-16 07:12:02.997089] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.594 [2024-10-16 07:12:03.006020] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.594 [2024-10-16 07:12:03.006614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.594 [2024-10-16 07:12:03.006643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:03.594 [2024-10-16 07:12:03.006652] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:03.594 [2024-10-16 07:12:03.006882] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:03.594 [2024-10-16 07:12:03.007105] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.594 [2024-10-16 07:12:03.007114] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.594 [2024-10-16 07:12:03.007122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.594 [2024-10-16 07:12:03.010678] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.594 [2024-10-16 07:12:03.019896] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.594 [2024-10-16 07:12:03.020495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.594 [2024-10-16 07:12:03.020518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:03.594 [2024-10-16 07:12:03.020527] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:03.594 [2024-10-16 07:12:03.020756] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:03.594 [2024-10-16 07:12:03.020990] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.594 [2024-10-16 07:12:03.021001] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.594 [2024-10-16 07:12:03.021010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.594 [2024-10-16 07:12:03.024565] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.594 [2024-10-16 07:12:03.033792] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.594 [2024-10-16 07:12:03.035449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.594 [2024-10-16 07:12:03.035489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:03.594 [2024-10-16 07:12:03.035500] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:03.594 [2024-10-16 07:12:03.035743] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:03.594 [2024-10-16 07:12:03.035980] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.594 [2024-10-16 07:12:03.035990] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.594 [2024-10-16 07:12:03.035997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.594 [2024-10-16 07:12:03.039577] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.594 [2024-10-16 07:12:03.047766] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.594 [2024-10-16 07:12:03.048413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.594 [2024-10-16 07:12:03.048439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:03.594 [2024-10-16 07:12:03.048448] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:03.594 [2024-10-16 07:12:03.048670] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:03.594 [2024-10-16 07:12:03.048904] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.594 [2024-10-16 07:12:03.048915] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.594 [2024-10-16 07:12:03.048923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.595 [2024-10-16 07:12:03.052492] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.595 [2024-10-16 07:12:03.061748] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.595 [2024-10-16 07:12:03.062444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.595 [2024-10-16 07:12:03.062508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:03.595 [2024-10-16 07:12:03.062521] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:03.595 [2024-10-16 07:12:03.062776] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:03.595 [2024-10-16 07:12:03.063016] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.595 [2024-10-16 07:12:03.063027] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.595 [2024-10-16 07:12:03.063045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.595 [2024-10-16 07:12:03.066624] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.595 [2024-10-16 07:12:03.075662] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.595 [2024-10-16 07:12:03.076285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.595 [2024-10-16 07:12:03.076313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:03.595 [2024-10-16 07:12:03.076322] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:03.595 [2024-10-16 07:12:03.076545] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:03.595 [2024-10-16 07:12:03.076766] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.595 [2024-10-16 07:12:03.076776] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.595 [2024-10-16 07:12:03.076784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.595 [2024-10-16 07:12:03.080364] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.595 [2024-10-16 07:12:03.089598] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:03.595 [2024-10-16 07:12:03.090213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.595 [2024-10-16 07:12:03.090277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:03.595 [2024-10-16 07:12:03.090291] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:03.595 [2024-10-16 07:12:03.090545] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:03.595 [2024-10-16 07:12:03.090772] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:03.595 [2024-10-16 07:12:03.090781] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:03.595 [2024-10-16 07:12:03.090790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:03.857 [2024-10-16 07:12:03.094385] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:03.857 [2024-10-16 07:12:03.103423] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:03.857 [2024-10-16 07:12:03.104093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.857 [2024-10-16 07:12:03.104122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:03.857 [2024-10-16 07:12:03.104132] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:03.857 [2024-10-16 07:12:03.104356] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:03.857 [2024-10-16 07:12:03.104578] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:03.857 [2024-10-16 07:12:03.104590] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:03.857 [2024-10-16 07:12:03.104598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:03.857 [2024-10-16 07:12:03.108175] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:03.857 [2024-10-16 07:12:03.117406] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:03.857 [2024-10-16 07:12:03.118118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.857 [2024-10-16 07:12:03.118182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:03.857 [2024-10-16 07:12:03.118195] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:03.857 [2024-10-16 07:12:03.118450] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:03.857 [2024-10-16 07:12:03.118677] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:03.857 [2024-10-16 07:12:03.118688] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:03.857 [2024-10-16 07:12:03.118697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:03.857 [2024-10-16 07:12:03.122275] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:03.857 [2024-10-16 07:12:03.131326] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:03.857 [2024-10-16 07:12:03.131965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.857 [2024-10-16 07:12:03.131995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:03.857 [2024-10-16 07:12:03.132005] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:03.857 [2024-10-16 07:12:03.132230] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:03.857 [2024-10-16 07:12:03.132452] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:03.857 [2024-10-16 07:12:03.132462] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:03.857 [2024-10-16 07:12:03.132470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:03.857 [2024-10-16 07:12:03.136054] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:03.857 [2024-10-16 07:12:03.145289] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:03.858 [2024-10-16 07:12:03.145859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.858 [2024-10-16 07:12:03.145884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:03.858 [2024-10-16 07:12:03.145894] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:03.858 [2024-10-16 07:12:03.146116] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:03.858 [2024-10-16 07:12:03.146337] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:03.858 [2024-10-16 07:12:03.146348] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:03.858 [2024-10-16 07:12:03.146356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:03.858 [2024-10-16 07:12:03.149932] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:03.858 [2024-10-16 07:12:03.159192] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:03.858 [2024-10-16 07:12:03.159755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.858 [2024-10-16 07:12:03.159777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:03.858 [2024-10-16 07:12:03.159785] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:03.858 [2024-10-16 07:12:03.160018] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:03.858 [2024-10-16 07:12:03.160247] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:03.858 [2024-10-16 07:12:03.160257] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:03.858 [2024-10-16 07:12:03.160266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:03.858 [2024-10-16 07:12:03.163828] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:03.858 [2024-10-16 07:12:03.173373] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:03.858 [2024-10-16 07:12:03.174078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.858 [2024-10-16 07:12:03.174141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:03.858 [2024-10-16 07:12:03.174154] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:03.858 [2024-10-16 07:12:03.174410] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:03.858 [2024-10-16 07:12:03.174636] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:03.858 [2024-10-16 07:12:03.174648] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:03.858 [2024-10-16 07:12:03.174656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:03.858 [2024-10-16 07:12:03.178242] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:03.858 [2024-10-16 07:12:03.187249] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:03.858 [2024-10-16 07:12:03.187831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.858 [2024-10-16 07:12:03.187868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:03.858 [2024-10-16 07:12:03.187878] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:03.858 [2024-10-16 07:12:03.188101] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:03.858 [2024-10-16 07:12:03.188323] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:03.858 [2024-10-16 07:12:03.188334] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:03.858 [2024-10-16 07:12:03.188342] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:03.858 [2024-10-16 07:12:03.191963] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:03.858 [2024-10-16 07:12:03.201202] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:03.858 [2024-10-16 07:12:03.201779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.858 [2024-10-16 07:12:03.201803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:03.858 [2024-10-16 07:12:03.201812] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:03.858 [2024-10-16 07:12:03.202045] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:03.858 [2024-10-16 07:12:03.202267] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:03.858 [2024-10-16 07:12:03.202276] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:03.858 [2024-10-16 07:12:03.202284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:03.858 [2024-10-16 07:12:03.205869] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:03.858 [2024-10-16 07:12:03.215106] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:03.858 [2024-10-16 07:12:03.215666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.858 [2024-10-16 07:12:03.215688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:03.858 [2024-10-16 07:12:03.215697] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:03.858 [2024-10-16 07:12:03.215928] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:03.858 [2024-10-16 07:12:03.216151] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:03.858 [2024-10-16 07:12:03.216160] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:03.858 [2024-10-16 07:12:03.216168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:03.858 [2024-10-16 07:12:03.219727] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:03.858 [2024-10-16 07:12:03.228983] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:03.858 [2024-10-16 07:12:03.229542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.858 [2024-10-16 07:12:03.229565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:03.858 [2024-10-16 07:12:03.229573] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:03.858 [2024-10-16 07:12:03.229795] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:03.858 [2024-10-16 07:12:03.230027] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:03.858 [2024-10-16 07:12:03.230037] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:03.858 [2024-10-16 07:12:03.230045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:03.858 [2024-10-16 07:12:03.233606] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:03.858 [2024-10-16 07:12:03.242855] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:03.858 [2024-10-16 07:12:03.243427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.858 [2024-10-16 07:12:03.243449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:03.858 [2024-10-16 07:12:03.243457] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:03.858 [2024-10-16 07:12:03.243679] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:03.858 [2024-10-16 07:12:03.243909] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:03.858 [2024-10-16 07:12:03.243920] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:03.858 [2024-10-16 07:12:03.243927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:03.858 [2024-10-16 07:12:03.247486] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:03.858 [2024-10-16 07:12:03.256748] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:03.858 [2024-10-16 07:12:03.257317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.858 [2024-10-16 07:12:03.257339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:03.858 [2024-10-16 07:12:03.257354] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:03.858 [2024-10-16 07:12:03.257575] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:03.858 [2024-10-16 07:12:03.257796] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:03.858 [2024-10-16 07:12:03.257808] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:03.858 [2024-10-16 07:12:03.257816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:03.858 [2024-10-16 07:12:03.261394] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:03.858 [2024-10-16 07:12:03.270627] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:03.858 [2024-10-16 07:12:03.271184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.858 [2024-10-16 07:12:03.271208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:03.858 [2024-10-16 07:12:03.271218] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:03.858 [2024-10-16 07:12:03.271440] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:03.858 [2024-10-16 07:12:03.271660] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:03.858 [2024-10-16 07:12:03.271669] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:03.858 [2024-10-16 07:12:03.271677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:03.858 [2024-10-16 07:12:03.275253] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:03.858 [2024-10-16 07:12:03.284503] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:03.858 [2024-10-16 07:12:03.285082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.858 [2024-10-16 07:12:03.285147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:03.858 [2024-10-16 07:12:03.285160] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:03.858 [2024-10-16 07:12:03.285414] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:03.858 [2024-10-16 07:12:03.285640] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:03.858 [2024-10-16 07:12:03.285651] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:03.858 [2024-10-16 07:12:03.285659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:03.859 [2024-10-16 07:12:03.289234] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:03.859 [2024-10-16 07:12:03.298445] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:03.859 [2024-10-16 07:12:03.299078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.859 [2024-10-16 07:12:03.299108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:03.859 [2024-10-16 07:12:03.299119] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:03.859 [2024-10-16 07:12:03.299343] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:03.859 [2024-10-16 07:12:03.299574] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:03.859 [2024-10-16 07:12:03.299582] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:03.859 [2024-10-16 07:12:03.299590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:03.859 [2024-10-16 07:12:03.303155] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:03.859 [2024-10-16 07:12:03.312374] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:03.859 [2024-10-16 07:12:03.313030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.859 [2024-10-16 07:12:03.313095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:03.859 [2024-10-16 07:12:03.313108] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:03.859 [2024-10-16 07:12:03.313363] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:03.859 [2024-10-16 07:12:03.313590] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:03.859 [2024-10-16 07:12:03.313599] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:03.859 [2024-10-16 07:12:03.313607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:03.859 [2024-10-16 07:12:03.317180] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:03.859 [2024-10-16 07:12:03.326183] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:03.859 [2024-10-16 07:12:03.326907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.859 [2024-10-16 07:12:03.326970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:03.859 [2024-10-16 07:12:03.326983] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:03.859 [2024-10-16 07:12:03.327238] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:03.859 [2024-10-16 07:12:03.327465] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:03.859 [2024-10-16 07:12:03.327474] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:03.859 [2024-10-16 07:12:03.327482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:03.859 [2024-10-16 07:12:03.331065] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:03.859 [2024-10-16 07:12:03.340054] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:03.859 [2024-10-16 07:12:03.340728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.859 [2024-10-16 07:12:03.340790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:03.859 [2024-10-16 07:12:03.340803] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:03.859 [2024-10-16 07:12:03.341072] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:03.859 [2024-10-16 07:12:03.341300] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:03.859 [2024-10-16 07:12:03.341309] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:03.859 [2024-10-16 07:12:03.341318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:03.859 [2024-10-16 07:12:03.344803] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:03.859 [2024-10-16 07:12:03.352699] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:03.859 [2024-10-16 07:12:03.353308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.859 [2024-10-16 07:12:03.353365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:03.859 [2024-10-16 07:12:03.353375] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:03.859 [2024-10-16 07:12:03.353559] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:03.859 [2024-10-16 07:12:03.353718] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:03.859 [2024-10-16 07:12:03.353724] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:03.859 [2024-10-16 07:12:03.353731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.121 [2024-10-16 07:12:03.356217] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.121 [2024-10-16 07:12:03.365407] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.121 [2024-10-16 07:12:03.365996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.121 [2024-10-16 07:12:03.366049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:04.121 [2024-10-16 07:12:03.366059] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:04.121 [2024-10-16 07:12:03.366240] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:04.121 [2024-10-16 07:12:03.366397] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.121 [2024-10-16 07:12:03.366404] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.121 [2024-10-16 07:12:03.366410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.121 [2024-10-16 07:12:03.368871] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.121 [2024-10-16 07:12:03.378076] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.121 [2024-10-16 07:12:03.378668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.121 [2024-10-16 07:12:03.378714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:04.121 [2024-10-16 07:12:03.378723] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:04.121 [2024-10-16 07:12:03.378908] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:04.121 [2024-10-16 07:12:03.379065] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.121 [2024-10-16 07:12:03.379072] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.121 [2024-10-16 07:12:03.379078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.121 [2024-10-16 07:12:03.381521] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.122 [2024-10-16 07:12:03.390830] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.122 [2024-10-16 07:12:03.391414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.122 [2024-10-16 07:12:03.391457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:04.122 [2024-10-16 07:12:03.391472] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:04.122 [2024-10-16 07:12:03.391647] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:04.122 [2024-10-16 07:12:03.391802] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.122 [2024-10-16 07:12:03.391809] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.122 [2024-10-16 07:12:03.391814] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.122 [2024-10-16 07:12:03.394266] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.122 [2024-10-16 07:12:03.403573] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.122 [2024-10-16 07:12:03.404090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.122 [2024-10-16 07:12:03.404109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:04.122 [2024-10-16 07:12:03.404115] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:04.122 [2024-10-16 07:12:03.404268] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:04.122 [2024-10-16 07:12:03.404419] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.122 [2024-10-16 07:12:03.404425] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.122 [2024-10-16 07:12:03.404430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.122 [2024-10-16 07:12:03.406865] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.122 [2024-10-16 07:12:03.416302] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.122 [2024-10-16 07:12:03.416839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.122 [2024-10-16 07:12:03.416859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:04.122 [2024-10-16 07:12:03.416865] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:04.122 [2024-10-16 07:12:03.417016] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:04.122 [2024-10-16 07:12:03.417167] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.122 [2024-10-16 07:12:03.417172] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.122 [2024-10-16 07:12:03.417178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.122 [2024-10-16 07:12:03.419606] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.122 [2024-10-16 07:12:03.428915] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.122 [2024-10-16 07:12:03.429408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.122 [2024-10-16 07:12:03.429421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:04.122 [2024-10-16 07:12:03.429426] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:04.122 [2024-10-16 07:12:03.429577] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:04.122 [2024-10-16 07:12:03.429728] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.122 [2024-10-16 07:12:03.429737] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.122 [2024-10-16 07:12:03.429742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.122 [2024-10-16 07:12:03.432173] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.122 [2024-10-16 07:12:03.441607] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.122 [2024-10-16 07:12:03.442195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.122 [2024-10-16 07:12:03.442230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:04.122 [2024-10-16 07:12:03.442239] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:04.122 [2024-10-16 07:12:03.442408] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:04.122 [2024-10-16 07:12:03.442562] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.122 [2024-10-16 07:12:03.442569] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.122 [2024-10-16 07:12:03.442574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.122 [2024-10-16 07:12:03.445019] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.122 [2024-10-16 07:12:03.454314] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.122 [2024-10-16 07:12:03.454799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.122 [2024-10-16 07:12:03.454833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:04.122 [2024-10-16 07:12:03.454842] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:04.122 [2024-10-16 07:12:03.455017] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:04.122 [2024-10-16 07:12:03.455172] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.122 [2024-10-16 07:12:03.455178] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.122 [2024-10-16 07:12:03.455184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.122 [2024-10-16 07:12:03.457633] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.122 [2024-10-16 07:12:03.466929] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.122 [2024-10-16 07:12:03.467384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.122 [2024-10-16 07:12:03.467415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:04.122 [2024-10-16 07:12:03.467424] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:04.122 [2024-10-16 07:12:03.467591] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:04.122 [2024-10-16 07:12:03.467744] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.122 [2024-10-16 07:12:03.467751] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.122 [2024-10-16 07:12:03.467756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.122 [2024-10-16 07:12:03.470197] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.122 [2024-10-16 07:12:03.479633] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.122 [2024-10-16 07:12:03.480199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.122 [2024-10-16 07:12:03.480231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:04.122 [2024-10-16 07:12:03.480240] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:04.122 [2024-10-16 07:12:03.480406] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:04.122 [2024-10-16 07:12:03.480560] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.122 [2024-10-16 07:12:03.480566] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.122 [2024-10-16 07:12:03.480571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.122 [2024-10-16 07:12:03.483012] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.122 [2024-10-16 07:12:03.492309] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.122 [2024-10-16 07:12:03.492798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.122 [2024-10-16 07:12:03.492812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:04.122 [2024-10-16 07:12:03.492818] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:04.122 [2024-10-16 07:12:03.492975] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:04.122 [2024-10-16 07:12:03.493126] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.122 [2024-10-16 07:12:03.493132] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.122 [2024-10-16 07:12:03.493137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.122 [2024-10-16 07:12:03.495561] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.122 [2024-10-16 07:12:03.504992] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.122 [2024-10-16 07:12:03.505308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.122 [2024-10-16 07:12:03.505322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:04.122 [2024-10-16 07:12:03.505327] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:04.122 [2024-10-16 07:12:03.505478] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:04.122 [2024-10-16 07:12:03.505628] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.122 [2024-10-16 07:12:03.505633] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.123 [2024-10-16 07:12:03.505638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.123 [2024-10-16 07:12:03.508069] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.123 [2024-10-16 07:12:03.517642] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.123 [2024-10-16 07:12:03.518095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.123 [2024-10-16 07:12:03.518125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:04.123 [2024-10-16 07:12:03.518134] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:04.123 [2024-10-16 07:12:03.518303] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:04.123 [2024-10-16 07:12:03.518458] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.123 [2024-10-16 07:12:03.518464] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.123 [2024-10-16 07:12:03.518470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.123 [2024-10-16 07:12:03.520910] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.123 [2024-10-16 07:12:03.530372] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.123 [2024-10-16 07:12:03.530884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.123 [2024-10-16 07:12:03.530905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:04.123 [2024-10-16 07:12:03.530911] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:04.123 [2024-10-16 07:12:03.531068] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:04.123 [2024-10-16 07:12:03.531220] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.123 [2024-10-16 07:12:03.531226] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.123 [2024-10-16 07:12:03.531231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.123 [2024-10-16 07:12:03.533662] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.123 [2024-10-16 07:12:03.543096] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.123 [2024-10-16 07:12:03.543661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.123 [2024-10-16 07:12:03.543691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:04.123 [2024-10-16 07:12:03.543700] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:04.123 [2024-10-16 07:12:03.543873] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:04.123 [2024-10-16 07:12:03.544028] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.123 [2024-10-16 07:12:03.544034] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.123 [2024-10-16 07:12:03.544040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.123 [2024-10-16 07:12:03.546469] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.123 [2024-10-16 07:12:03.555701] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.123 [2024-10-16 07:12:03.556264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.123 [2024-10-16 07:12:03.556294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:04.123 [2024-10-16 07:12:03.556303] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:04.123 [2024-10-16 07:12:03.556469] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:04.123 [2024-10-16 07:12:03.556623] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.123 [2024-10-16 07:12:03.556629] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.123 [2024-10-16 07:12:03.556638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.123 [2024-10-16 07:12:03.559085] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.123 [2024-10-16 07:12:03.568378] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.123 [2024-10-16 07:12:03.568920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.123 [2024-10-16 07:12:03.568950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:04.123 [2024-10-16 07:12:03.568959] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:04.123 [2024-10-16 07:12:03.569127] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:04.123 [2024-10-16 07:12:03.569281] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.123 [2024-10-16 07:12:03.569287] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.123 [2024-10-16 07:12:03.569292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.123 [2024-10-16 07:12:03.571731] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.123 [2024-10-16 07:12:03.581024] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.123 [2024-10-16 07:12:03.581514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.123 [2024-10-16 07:12:03.581528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:04.123 [2024-10-16 07:12:03.581534] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:04.123 [2024-10-16 07:12:03.581684] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:04.123 [2024-10-16 07:12:03.581835] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.123 [2024-10-16 07:12:03.581840] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.123 [2024-10-16 07:12:03.581852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.123 [2024-10-16 07:12:03.584298] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.123 [2024-10-16 07:12:03.593736] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.123 [2024-10-16 07:12:03.594263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.123 [2024-10-16 07:12:03.594294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:04.123 [2024-10-16 07:12:03.594303] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:04.123 [2024-10-16 07:12:03.594470] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:04.123 [2024-10-16 07:12:03.594623] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.123 [2024-10-16 07:12:03.594629] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.123 [2024-10-16 07:12:03.594634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.123 [2024-10-16 07:12:03.597075] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.123 [2024-10-16 07:12:03.606371] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.123 [2024-10-16 07:12:03.606939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.123 [2024-10-16 07:12:03.606973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:04.123 [2024-10-16 07:12:03.606982] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:04.123 [2024-10-16 07:12:03.607151] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:04.123 [2024-10-16 07:12:03.607305] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.123 [2024-10-16 07:12:03.607311] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.123 [2024-10-16 07:12:03.607316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.123 [2024-10-16 07:12:03.609755] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.123 [2024-10-16 07:12:03.619063] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.123 [2024-10-16 07:12:03.619610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.123 [2024-10-16 07:12:03.619640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:04.123 [2024-10-16 07:12:03.619649] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:04.123 [2024-10-16 07:12:03.619815] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:04.123 [2024-10-16 07:12:03.619976] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.123 [2024-10-16 07:12:03.619983] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.123 [2024-10-16 07:12:03.619988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.386 [2024-10-16 07:12:03.622418] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.386 [2024-10-16 07:12:03.631722] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.386 [2024-10-16 07:12:03.632286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.386 [2024-10-16 07:12:03.632316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:04.386 [2024-10-16 07:12:03.632325] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:04.386 [2024-10-16 07:12:03.632491] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:04.386 [2024-10-16 07:12:03.632645] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.386 [2024-10-16 07:12:03.632651] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.386 [2024-10-16 07:12:03.632657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.386 [2024-10-16 07:12:03.635097] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.386 [2024-10-16 07:12:03.644448] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.386 [2024-10-16 07:12:03.644934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.386 [2024-10-16 07:12:03.644949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:04.386 [2024-10-16 07:12:03.644955] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:04.386 [2024-10-16 07:12:03.645106] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:04.386 [2024-10-16 07:12:03.645261] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.386 [2024-10-16 07:12:03.645267] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.386 [2024-10-16 07:12:03.645272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.386 [2024-10-16 07:12:03.647697] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.386 [2024-10-16 07:12:03.657130] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.386 [2024-10-16 07:12:03.657691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.386 [2024-10-16 07:12:03.657722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:04.386 [2024-10-16 07:12:03.657731] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:04.386 [2024-10-16 07:12:03.657909] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:04.386 [2024-10-16 07:12:03.658063] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.386 [2024-10-16 07:12:03.658069] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.386 [2024-10-16 07:12:03.658075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.386 [2024-10-16 07:12:03.660503] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.386 [2024-10-16 07:12:03.669794] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.386 [2024-10-16 07:12:03.670342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.386 [2024-10-16 07:12:03.670372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:04.386 [2024-10-16 07:12:03.670381] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:04.386 [2024-10-16 07:12:03.670548] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:04.386 [2024-10-16 07:12:03.670701] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.386 [2024-10-16 07:12:03.670707] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.386 [2024-10-16 07:12:03.670712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.386 [2024-10-16 07:12:03.673152] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.386 [2024-10-16 07:12:03.682445] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.386 [2024-10-16 07:12:03.683027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.386 [2024-10-16 07:12:03.683057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:04.386 [2024-10-16 07:12:03.683066] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:04.386 [2024-10-16 07:12:03.683232] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:04.386 [2024-10-16 07:12:03.683386] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.386 [2024-10-16 07:12:03.683392] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.386 [2024-10-16 07:12:03.683397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.386 [2024-10-16 07:12:03.685842] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.386 [2024-10-16 07:12:03.695138] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.386 [2024-10-16 07:12:03.695740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.386 [2024-10-16 07:12:03.695770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:04.386 [2024-10-16 07:12:03.695779] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:04.386 [2024-10-16 07:12:03.695954] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:04.386 [2024-10-16 07:12:03.696109] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.386 [2024-10-16 07:12:03.696115] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.386 [2024-10-16 07:12:03.696120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.386 [2024-10-16 07:12:03.698551] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.386 [2024-10-16 07:12:03.707855] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.386 [2024-10-16 07:12:03.708425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.386 [2024-10-16 07:12:03.708455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:04.386 [2024-10-16 07:12:03.708464] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:04.386 [2024-10-16 07:12:03.708630] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:04.386 [2024-10-16 07:12:03.708783] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.386 [2024-10-16 07:12:03.708789] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.386 [2024-10-16 07:12:03.708795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.386 [2024-10-16 07:12:03.711233] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.386 [2024-10-16 07:12:03.720528] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.386 [2024-10-16 07:12:03.721112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.386 [2024-10-16 07:12:03.721143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:04.386 [2024-10-16 07:12:03.721152] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:04.386 [2024-10-16 07:12:03.721318] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:04.386 [2024-10-16 07:12:03.721471] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.386 [2024-10-16 07:12:03.721477] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.386 [2024-10-16 07:12:03.721482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.386 [2024-10-16 07:12:03.723923] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.386 [2024-10-16 07:12:03.733224] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.386 [2024-10-16 07:12:03.733798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.386 [2024-10-16 07:12:03.733828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:04.386 [2024-10-16 07:12:03.733840] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:04.386 [2024-10-16 07:12:03.734016] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:04.386 [2024-10-16 07:12:03.734170] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.386 [2024-10-16 07:12:03.734176] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.386 [2024-10-16 07:12:03.734181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.386 [2024-10-16 07:12:03.736614] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.386 [2024-10-16 07:12:03.745912] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.386 [2024-10-16 07:12:03.746455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.386 [2024-10-16 07:12:03.746486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:04.386 [2024-10-16 07:12:03.746494] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:04.387 [2024-10-16 07:12:03.746660] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:04.387 [2024-10-16 07:12:03.746814] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.387 [2024-10-16 07:12:03.746820] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.387 [2024-10-16 07:12:03.746826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.387 [2024-10-16 07:12:03.749266] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.387 [2024-10-16 07:12:03.758558] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.387 [2024-10-16 07:12:03.759035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.387 [2024-10-16 07:12:03.759049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:04.387 [2024-10-16 07:12:03.759055] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:04.387 [2024-10-16 07:12:03.759206] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:04.387 [2024-10-16 07:12:03.759356] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.387 [2024-10-16 07:12:03.759362] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.387 [2024-10-16 07:12:03.759367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.387 [2024-10-16 07:12:03.761803] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.387 [2024-10-16 07:12:03.771235] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.387 [2024-10-16 07:12:03.771581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.387 [2024-10-16 07:12:03.771594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:04.387 [2024-10-16 07:12:03.771600] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:04.387 [2024-10-16 07:12:03.771751] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:04.387 [2024-10-16 07:12:03.771906] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.387 [2024-10-16 07:12:03.771916] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.387 [2024-10-16 07:12:03.771921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.387 [2024-10-16 07:12:03.774349] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.387 [2024-10-16 07:12:03.783925] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.387 [2024-10-16 07:12:03.784371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.387 [2024-10-16 07:12:03.784383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:04.387 [2024-10-16 07:12:03.784388] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:04.387 [2024-10-16 07:12:03.784539] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:04.387 [2024-10-16 07:12:03.784689] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.387 [2024-10-16 07:12:03.784695] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.387 [2024-10-16 07:12:03.784701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.387 [2024-10-16 07:12:03.787128] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.387 [2024-10-16 07:12:03.796586] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.387 [2024-10-16 07:12:03.797039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.387 [2024-10-16 07:12:03.797070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:04.387 [2024-10-16 07:12:03.797079] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:04.387 [2024-10-16 07:12:03.797245] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:04.387 [2024-10-16 07:12:03.797399] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.387 [2024-10-16 07:12:03.797405] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.387 [2024-10-16 07:12:03.797410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.387 [2024-10-16 07:12:03.799852] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.387 [2024-10-16 07:12:03.809297] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.387 [2024-10-16 07:12:03.809891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.387 [2024-10-16 07:12:03.809922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:04.387 [2024-10-16 07:12:03.809931] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:04.387 [2024-10-16 07:12:03.810099] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:04.387 [2024-10-16 07:12:03.810253] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.387 [2024-10-16 07:12:03.810259] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.387 [2024-10-16 07:12:03.810264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.387 [2024-10-16 07:12:03.812701] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.387 7346.50 IOPS, 28.70 MiB/s [2024-10-16T05:12:03.886Z]
[2024-10-16 07:12:03.823139] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.387 [2024-10-16 07:12:03.823687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.387 [2024-10-16 07:12:03.823718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:04.387 [2024-10-16 07:12:03.823727] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:04.387 [2024-10-16 07:12:03.823900] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:04.387 [2024-10-16 07:12:03.824054] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.387 [2024-10-16 07:12:03.824060] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.387 [2024-10-16 07:12:03.824066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.387 [2024-10-16 07:12:03.826495] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.387 [2024-10-16 07:12:03.835796] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.387 [2024-10-16 07:12:03.836364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.387 [2024-10-16 07:12:03.836394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:04.387 [2024-10-16 07:12:03.836403] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:04.387 [2024-10-16 07:12:03.836569] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:04.387 [2024-10-16 07:12:03.836723] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.387 [2024-10-16 07:12:03.836729] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.387 [2024-10-16 07:12:03.836735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.387 [2024-10-16 07:12:03.839175] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
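The interleaved bdevperf sample, 7346.50 IOPS at 28.70 MiB/s, is consistent with a 4 KiB I/O size (7346.50 x 4096 B / 2^20 ~ 28.70 MiB/s); the block size is inferred from the numbers, not stated in the log. A quick check:

/* Sanity-check the sample above: throughput = IOPS * I/O size.
 * The 4 KiB I/O size is inferred from the numbers, not stated in the log. */
#include <stdio.h>

int main(void)
{
    const double iops = 7346.50;
    const double io_bytes = 4096.0;                 /* inferred, see lead-in */
    const double mib_s = iops * io_bytes / (1024.0 * 1024.0);
    printf("%.2f IOPS x %.0f B = %.2f MiB/s\n", iops, io_bytes, mib_s); /* 28.70 */
    return 0;
}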
00:29:04.387 [2024-10-16 07:12:03.848468] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.387 [2024-10-16 07:12:03.849060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.387 [2024-10-16 07:12:03.849091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:04.387 [2024-10-16 07:12:03.849100] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:04.387 [2024-10-16 07:12:03.849266] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:04.388 [2024-10-16 07:12:03.849419] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.388 [2024-10-16 07:12:03.849425] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.388 [2024-10-16 07:12:03.849430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.388 [2024-10-16 07:12:03.851867] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.388 [2024-10-16 07:12:03.861167] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.388 [2024-10-16 07:12:03.861738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.388 [2024-10-16 07:12:03.861768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:04.388 [2024-10-16 07:12:03.861780] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:04.388 [2024-10-16 07:12:03.861953] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:04.388 [2024-10-16 07:12:03.862107] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.388 [2024-10-16 07:12:03.862113] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.388 [2024-10-16 07:12:03.862118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.388 [2024-10-16 07:12:03.864551] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.388 [2024-10-16 07:12:03.873855] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.388 [2024-10-16 07:12:03.874437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.388 [2024-10-16 07:12:03.874467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:04.388 [2024-10-16 07:12:03.874476] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:04.388 [2024-10-16 07:12:03.874642] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:04.388 [2024-10-16 07:12:03.874796] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.388 [2024-10-16 07:12:03.874802] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.388 [2024-10-16 07:12:03.874807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.388 [2024-10-16 07:12:03.877245] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.651 [2024-10-16 07:12:03.886541] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.651 [2024-10-16 07:12:03.887135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.651 [2024-10-16 07:12:03.887165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:04.651 [2024-10-16 07:12:03.887174] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:04.651 [2024-10-16 07:12:03.887341] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:04.651 [2024-10-16 07:12:03.887494] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.651 [2024-10-16 07:12:03.887501] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.651 [2024-10-16 07:12:03.887507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.651 [2024-10-16 07:12:03.889948] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.651 [2024-10-16 07:12:03.899245] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.651 [2024-10-16 07:12:03.899817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.651 [2024-10-16 07:12:03.899852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:04.651 [2024-10-16 07:12:03.899862] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:04.651 [2024-10-16 07:12:03.900031] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:04.651 [2024-10-16 07:12:03.900184] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.651 [2024-10-16 07:12:03.900195] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.651 [2024-10-16 07:12:03.900200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.651 [2024-10-16 07:12:03.902632] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.651 [2024-10-16 07:12:03.911942] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.651 [2024-10-16 07:12:03.912579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.651 [2024-10-16 07:12:03.912610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:04.651 [2024-10-16 07:12:03.912619] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:04.651 [2024-10-16 07:12:03.912785] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:04.651 [2024-10-16 07:12:03.912948] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.651 [2024-10-16 07:12:03.912955] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.651 [2024-10-16 07:12:03.912960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.651 [2024-10-16 07:12:03.915392] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.651 [2024-10-16 07:12:03.924548] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.651 [2024-10-16 07:12:03.925102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.651 [2024-10-16 07:12:03.925133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:04.651 [2024-10-16 07:12:03.925142] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:04.651 [2024-10-16 07:12:03.925308] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:04.651 [2024-10-16 07:12:03.925461] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.651 [2024-10-16 07:12:03.925468] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.651 [2024-10-16 07:12:03.925473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.651 [2024-10-16 07:12:03.927913] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.651 [2024-10-16 07:12:03.937219] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.651 [2024-10-16 07:12:03.937788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.651 [2024-10-16 07:12:03.937818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:04.651 [2024-10-16 07:12:03.937827] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:04.651 [2024-10-16 07:12:03.938002] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:04.651 [2024-10-16 07:12:03.938156] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.651 [2024-10-16 07:12:03.938162] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.651 [2024-10-16 07:12:03.938169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.651 [2024-10-16 07:12:03.940597] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.651 [2024-10-16 07:12:03.949892] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.651 [2024-10-16 07:12:03.950442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.651 [2024-10-16 07:12:03.950472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:04.651 [2024-10-16 07:12:03.950481] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:04.651 [2024-10-16 07:12:03.950648] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:04.651 [2024-10-16 07:12:03.950801] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.651 [2024-10-16 07:12:03.950807] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.651 [2024-10-16 07:12:03.950812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.651 [2024-10-16 07:12:03.953251] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.651 [2024-10-16 07:12:03.962585] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.652 [2024-10-16 07:12:03.963216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.652 [2024-10-16 07:12:03.963247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:04.652 [2024-10-16 07:12:03.963256] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:04.652 [2024-10-16 07:12:03.963422] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:04.652 [2024-10-16 07:12:03.963575] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.652 [2024-10-16 07:12:03.963582] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.652 [2024-10-16 07:12:03.963587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.652 [2024-10-16 07:12:03.966029] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.652 [2024-10-16 07:12:03.975325] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.652 [2024-10-16 07:12:03.975819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.652 [2024-10-16 07:12:03.975833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:04.652 [2024-10-16 07:12:03.975839] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:04.652 [2024-10-16 07:12:03.975993] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:04.652 [2024-10-16 07:12:03.976145] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.652 [2024-10-16 07:12:03.976150] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.652 [2024-10-16 07:12:03.976155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.652 [2024-10-16 07:12:03.978580] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.652 [2024-10-16 07:12:03.988022] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.652 [2024-10-16 07:12:03.988510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.652 [2024-10-16 07:12:03.988522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:04.652 [2024-10-16 07:12:03.988527] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:04.652 [2024-10-16 07:12:03.988681] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:04.652 [2024-10-16 07:12:03.988833] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.652 [2024-10-16 07:12:03.988839] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.652 [2024-10-16 07:12:03.988846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.652 [2024-10-16 07:12:03.991273] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.652 [2024-10-16 07:12:04.000731] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.652 [2024-10-16 07:12:04.001175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.652 [2024-10-16 07:12:04.001206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:04.652 [2024-10-16 07:12:04.001215] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:04.652 [2024-10-16 07:12:04.001384] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:04.652 [2024-10-16 07:12:04.001537] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.652 [2024-10-16 07:12:04.001543] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.652 [2024-10-16 07:12:04.001549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.652 [2024-10-16 07:12:04.003988] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.652 [2024-10-16 07:12:04.013442] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.652 [2024-10-16 07:12:04.014136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.652 [2024-10-16 07:12:04.014167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:04.652 [2024-10-16 07:12:04.014176] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:04.652 [2024-10-16 07:12:04.014342] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:04.652 [2024-10-16 07:12:04.014495] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.652 [2024-10-16 07:12:04.014501] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.652 [2024-10-16 07:12:04.014507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.652 [2024-10-16 07:12:04.016946] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.652 [2024-10-16 07:12:04.026136] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.652 [2024-10-16 07:12:04.026747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.652 [2024-10-16 07:12:04.026778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:04.652 [2024-10-16 07:12:04.026787] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:04.652 [2024-10-16 07:12:04.026963] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:04.652 [2024-10-16 07:12:04.027118] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.652 [2024-10-16 07:12:04.027124] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.652 [2024-10-16 07:12:04.027134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.652 [2024-10-16 07:12:04.029576] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.652 [2024-10-16 07:12:04.038884] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.652 [2024-10-16 07:12:04.039444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.652 [2024-10-16 07:12:04.039475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:04.652 [2024-10-16 07:12:04.039484] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:04.652 [2024-10-16 07:12:04.039650] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:04.652 [2024-10-16 07:12:04.039803] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.652 [2024-10-16 07:12:04.039810] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.652 [2024-10-16 07:12:04.039816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.652 [2024-10-16 07:12:04.042256] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.652 [2024-10-16 07:12:04.051560] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.652 [2024-10-16 07:12:04.052157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.652 [2024-10-16 07:12:04.052188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:04.652 [2024-10-16 07:12:04.052197] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:04.652 [2024-10-16 07:12:04.052363] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:04.652 [2024-10-16 07:12:04.052517] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.652 [2024-10-16 07:12:04.052523] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.652 [2024-10-16 07:12:04.052528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.652 [2024-10-16 07:12:04.054969] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.652 [2024-10-16 07:12:04.064275] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.652 [2024-10-16 07:12:04.064870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.652 [2024-10-16 07:12:04.064900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:04.652 [2024-10-16 07:12:04.064909] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:04.652 [2024-10-16 07:12:04.065075] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:04.652 [2024-10-16 07:12:04.065228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.652 [2024-10-16 07:12:04.065234] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.652 [2024-10-16 07:12:04.065240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.652 [2024-10-16 07:12:04.067678] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.652 [2024-10-16 07:12:04.076980] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.652 [2024-10-16 07:12:04.077535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.652 [2024-10-16 07:12:04.077569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:04.652 [2024-10-16 07:12:04.077577] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:04.652 [2024-10-16 07:12:04.077743] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:04.652 [2024-10-16 07:12:04.077904] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.652 [2024-10-16 07:12:04.077911] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.652 [2024-10-16 07:12:04.077916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.652 [2024-10-16 07:12:04.080349] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.653 [2024-10-16 07:12:04.089644] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.653 [2024-10-16 07:12:04.090203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.653 [2024-10-16 07:12:04.090233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:04.653 [2024-10-16 07:12:04.090242] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:04.653 [2024-10-16 07:12:04.090409] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:04.653 [2024-10-16 07:12:04.090562] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.653 [2024-10-16 07:12:04.090568] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.653 [2024-10-16 07:12:04.090574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.653 [2024-10-16 07:12:04.093016] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.653 [2024-10-16 07:12:04.102316] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.653 [2024-10-16 07:12:04.102812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.653 [2024-10-16 07:12:04.102827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:04.653 [2024-10-16 07:12:04.102832] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:04.653 [2024-10-16 07:12:04.102989] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:04.653 [2024-10-16 07:12:04.103140] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.653 [2024-10-16 07:12:04.103146] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.653 [2024-10-16 07:12:04.103151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.653 [2024-10-16 07:12:04.105578] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.653 [2024-10-16 07:12:04.115022] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.653 [2024-10-16 07:12:04.115515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.653 [2024-10-16 07:12:04.115527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:04.653 [2024-10-16 07:12:04.115532] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:04.653 [2024-10-16 07:12:04.115682] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:04.653 [2024-10-16 07:12:04.115836] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.653 [2024-10-16 07:12:04.115842] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.653 [2024-10-16 07:12:04.115853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.653 [2024-10-16 07:12:04.118282] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.653 [2024-10-16 07:12:04.127718] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.653 [2024-10-16 07:12:04.128183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.653 [2024-10-16 07:12:04.128195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:04.653 [2024-10-16 07:12:04.128200] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:04.653 [2024-10-16 07:12:04.128350] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:04.653 [2024-10-16 07:12:04.128500] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.653 [2024-10-16 07:12:04.128505] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.653 [2024-10-16 07:12:04.128510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.653 [2024-10-16 07:12:04.130947] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.653 [2024-10-16 07:12:04.140399] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.653 [2024-10-16 07:12:04.140851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.653 [2024-10-16 07:12:04.140863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:04.653 [2024-10-16 07:12:04.140868] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:04.653 [2024-10-16 07:12:04.141019] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:04.653 [2024-10-16 07:12:04.141169] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.653 [2024-10-16 07:12:04.141175] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.653 [2024-10-16 07:12:04.141180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.653 [2024-10-16 07:12:04.143607] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.916 [2024-10-16 07:12:04.153075] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.916 [2024-10-16 07:12:04.153639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.916 [2024-10-16 07:12:04.153670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:04.916 [2024-10-16 07:12:04.153679] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:04.916 [2024-10-16 07:12:04.153854] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:04.916 [2024-10-16 07:12:04.154009] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.916 [2024-10-16 07:12:04.154015] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.916 [2024-10-16 07:12:04.154020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.916 [2024-10-16 07:12:04.156457] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.916 [2024-10-16 07:12:04.165762] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.916 [2024-10-16 07:12:04.166314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.916 [2024-10-16 07:12:04.166345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:04.916 [2024-10-16 07:12:04.166354] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:04.916 [2024-10-16 07:12:04.166520] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:04.916 [2024-10-16 07:12:04.166674] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.916 [2024-10-16 07:12:04.166680] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.916 [2024-10-16 07:12:04.166685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.916 [2024-10-16 07:12:04.169126] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.916 [2024-10-16 07:12:04.178429] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.916 [2024-10-16 07:12:04.178949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.916 [2024-10-16 07:12:04.178980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:04.916 [2024-10-16 07:12:04.178989] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:04.916 [2024-10-16 07:12:04.179157] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:04.916 [2024-10-16 07:12:04.179311] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.916 [2024-10-16 07:12:04.179317] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.916 [2024-10-16 07:12:04.179323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.916 [2024-10-16 07:12:04.181762] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.916 [2024-10-16 07:12:04.191070] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.916 [2024-10-16 07:12:04.191548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.916 [2024-10-16 07:12:04.191577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:04.916 [2024-10-16 07:12:04.191586] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:04.916 [2024-10-16 07:12:04.191754] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:04.916 [2024-10-16 07:12:04.191919] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.916 [2024-10-16 07:12:04.191927] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.916 [2024-10-16 07:12:04.191933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.916 [2024-10-16 07:12:04.194365] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.916 [2024-10-16 07:12:04.203805] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.916 [2024-10-16 07:12:04.204263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.916 [2024-10-16 07:12:04.204277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:04.916 [2024-10-16 07:12:04.204286] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:04.916 [2024-10-16 07:12:04.204437] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:04.916 [2024-10-16 07:12:04.204588] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.916 [2024-10-16 07:12:04.204594] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.916 [2024-10-16 07:12:04.204599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.916 [2024-10-16 07:12:04.207030] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.916 [2024-10-16 07:12:04.216521] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.916 [2024-10-16 07:12:04.217154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.916 [2024-10-16 07:12:04.217184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:04.916 [2024-10-16 07:12:04.217193] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:04.916 [2024-10-16 07:12:04.217359] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:04.916 [2024-10-16 07:12:04.217513] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.916 [2024-10-16 07:12:04.217519] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.916 [2024-10-16 07:12:04.217524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.916 [2024-10-16 07:12:04.219965] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.916 [2024-10-16 07:12:04.229268] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.916 [2024-10-16 07:12:04.229832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.916 [2024-10-16 07:12:04.229869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:04.916 [2024-10-16 07:12:04.229878] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:04.916 [2024-10-16 07:12:04.230046] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:04.916 [2024-10-16 07:12:04.230199] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.916 [2024-10-16 07:12:04.230206] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.916 [2024-10-16 07:12:04.230211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.916 [2024-10-16 07:12:04.232646] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.916 [2024-10-16 07:12:04.241952] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.916 [2024-10-16 07:12:04.242509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.917 [2024-10-16 07:12:04.242539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:04.917 [2024-10-16 07:12:04.242548] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:04.917 [2024-10-16 07:12:04.242714] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:04.917 [2024-10-16 07:12:04.242873] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.917 [2024-10-16 07:12:04.242883] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.917 [2024-10-16 07:12:04.242889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.917 [2024-10-16 07:12:04.245325] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.917 [2024-10-16 07:12:04.254624] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.917 [2024-10-16 07:12:04.255215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.917 [2024-10-16 07:12:04.255246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:04.917 [2024-10-16 07:12:04.255255] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:04.917 [2024-10-16 07:12:04.255421] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:04.917 [2024-10-16 07:12:04.255575] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.917 [2024-10-16 07:12:04.255581] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.917 [2024-10-16 07:12:04.255586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.917 [2024-10-16 07:12:04.258026] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.917 [2024-10-16 07:12:04.267328] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.917 [2024-10-16 07:12:04.267919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.917 [2024-10-16 07:12:04.267950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:04.917 [2024-10-16 07:12:04.267959] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:04.917 [2024-10-16 07:12:04.268128] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:04.917 [2024-10-16 07:12:04.268281] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.917 [2024-10-16 07:12:04.268287] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.917 [2024-10-16 07:12:04.268293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.917 [2024-10-16 07:12:04.270734] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.917 [2024-10-16 07:12:04.280042] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.917 [2024-10-16 07:12:04.280627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.917 [2024-10-16 07:12:04.280658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:04.917 [2024-10-16 07:12:04.280666] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:04.917 [2024-10-16 07:12:04.280832] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:04.917 [2024-10-16 07:12:04.280992] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.917 [2024-10-16 07:12:04.281000] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.917 [2024-10-16 07:12:04.281005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.917 [2024-10-16 07:12:04.283437] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.917 [2024-10-16 07:12:04.292742] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.917 [2024-10-16 07:12:04.293387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.917 [2024-10-16 07:12:04.293418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:04.917 [2024-10-16 07:12:04.293427] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:04.917 [2024-10-16 07:12:04.293594] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:04.917 [2024-10-16 07:12:04.293748] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.917 [2024-10-16 07:12:04.293754] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.917 [2024-10-16 07:12:04.293759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.917 [2024-10-16 07:12:04.296199] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.917 [2024-10-16 07:12:04.305360] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.917 [2024-10-16 07:12:04.305858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.917 [2024-10-16 07:12:04.305874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:04.917 [2024-10-16 07:12:04.305880] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:04.917 [2024-10-16 07:12:04.306031] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:04.917 [2024-10-16 07:12:04.306181] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.917 [2024-10-16 07:12:04.306187] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.917 [2024-10-16 07:12:04.306192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.917 [2024-10-16 07:12:04.308616] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.917 [2024-10-16 07:12:04.318070] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.917 [2024-10-16 07:12:04.318636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.917 [2024-10-16 07:12:04.318666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:04.917 [2024-10-16 07:12:04.318675] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:04.917 [2024-10-16 07:12:04.318841] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:04.917 [2024-10-16 07:12:04.319004] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.917 [2024-10-16 07:12:04.319010] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.917 [2024-10-16 07:12:04.319016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.917 [2024-10-16 07:12:04.321447] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.917 [2024-10-16 07:12:04.330760] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.917 [2024-10-16 07:12:04.331314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.917 [2024-10-16 07:12:04.331345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:04.917 [2024-10-16 07:12:04.331359] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:04.917 [2024-10-16 07:12:04.331526] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:04.917 [2024-10-16 07:12:04.331679] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.917 [2024-10-16 07:12:04.331685] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.917 [2024-10-16 07:12:04.331690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.917 [2024-10-16 07:12:04.334135] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.917 [2024-10-16 07:12:04.343439] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.917 [2024-10-16 07:12:04.343934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.917 [2024-10-16 07:12:04.343965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:04.917 [2024-10-16 07:12:04.343974] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:04.917 [2024-10-16 07:12:04.344143] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:04.917 [2024-10-16 07:12:04.344296] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.917 [2024-10-16 07:12:04.344302] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.917 [2024-10-16 07:12:04.344308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.917 [2024-10-16 07:12:04.346743] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.917 [2024-10-16 07:12:04.356048] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.917 [2024-10-16 07:12:04.356660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.917 [2024-10-16 07:12:04.356690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:04.917 [2024-10-16 07:12:04.356699] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:04.917 [2024-10-16 07:12:04.356873] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:04.917 [2024-10-16 07:12:04.357026] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.917 [2024-10-16 07:12:04.357033] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.918 [2024-10-16 07:12:04.357038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.918 [2024-10-16 07:12:04.359469] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.918 [2024-10-16 07:12:04.368784] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.918 [2024-10-16 07:12:04.369332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.918 [2024-10-16 07:12:04.369363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:04.918 [2024-10-16 07:12:04.369372] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:04.918 [2024-10-16 07:12:04.369538] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:04.918 [2024-10-16 07:12:04.369692] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.918 [2024-10-16 07:12:04.369701] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.918 [2024-10-16 07:12:04.369707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.918 [2024-10-16 07:12:04.372145] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.918 [2024-10-16 07:12:04.381460] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.918 [2024-10-16 07:12:04.382061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.918 [2024-10-16 07:12:04.382092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:04.918 [2024-10-16 07:12:04.382101] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:04.918 [2024-10-16 07:12:04.382268] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:04.918 [2024-10-16 07:12:04.382421] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.918 [2024-10-16 07:12:04.382428] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.918 [2024-10-16 07:12:04.382433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.918 [2024-10-16 07:12:04.384871] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.918 [2024-10-16 07:12:04.394177] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.918 [2024-10-16 07:12:04.394662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.918 [2024-10-16 07:12:04.394676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:04.918 [2024-10-16 07:12:04.394682] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:04.918 [2024-10-16 07:12:04.394833] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:04.918 [2024-10-16 07:12:04.394989] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.918 [2024-10-16 07:12:04.394995] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.918 [2024-10-16 07:12:04.395000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.918 [2024-10-16 07:12:04.397424] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.918 [2024-10-16 07:12:04.406789] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.918 [2024-10-16 07:12:04.407274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.918 [2024-10-16 07:12:04.407286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:04.918 [2024-10-16 07:12:04.407291] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:04.918 [2024-10-16 07:12:04.407442] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:04.918 [2024-10-16 07:12:04.407592] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.918 [2024-10-16 07:12:04.407598] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.918 [2024-10-16 07:12:04.407603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.918 [2024-10-16 07:12:04.410041] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.181 [2024-10-16 07:12:04.419507] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.181 [2024-10-16 07:12:04.419959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.181 [2024-10-16 07:12:04.419972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.181 [2024-10-16 07:12:04.419978] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.181 [2024-10-16 07:12:04.420129] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.181 [2024-10-16 07:12:04.420279] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.181 [2024-10-16 07:12:04.420285] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.181 [2024-10-16 07:12:04.420290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.181 [2024-10-16 07:12:04.422716] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.181 [2024-10-16 07:12:04.432178] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.181 [2024-10-16 07:12:04.432726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.181 [2024-10-16 07:12:04.432757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.181 [2024-10-16 07:12:04.432766] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.181 [2024-10-16 07:12:04.432939] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.181 [2024-10-16 07:12:04.433093] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.181 [2024-10-16 07:12:04.433099] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.181 [2024-10-16 07:12:04.433105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.181 [2024-10-16 07:12:04.435536] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.181 [2024-10-16 07:12:04.444849] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.181 [2024-10-16 07:12:04.445376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.181 [2024-10-16 07:12:04.445407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.181 [2024-10-16 07:12:04.445416] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.181 [2024-10-16 07:12:04.445582] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.181 [2024-10-16 07:12:04.445735] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.181 [2024-10-16 07:12:04.445742] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.181 [2024-10-16 07:12:04.445747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.181 [2024-10-16 07:12:04.448185] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.181 [2024-10-16 07:12:04.457489] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.181 [2024-10-16 07:12:04.458058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.181 [2024-10-16 07:12:04.458089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.181 [2024-10-16 07:12:04.458098] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.181 [2024-10-16 07:12:04.458268] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.181 [2024-10-16 07:12:04.458421] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.181 [2024-10-16 07:12:04.458427] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.182 [2024-10-16 07:12:04.458433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.182 [2024-10-16 07:12:04.460874] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.182 [2024-10-16 07:12:04.470189] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.182 [2024-10-16 07:12:04.470776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.182 [2024-10-16 07:12:04.470806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.182 [2024-10-16 07:12:04.470815] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.182 [2024-10-16 07:12:04.470991] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.182 [2024-10-16 07:12:04.471145] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.182 [2024-10-16 07:12:04.471151] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.182 [2024-10-16 07:12:04.471156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.182 [2024-10-16 07:12:04.473586] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.182 [2024-10-16 07:12:04.482888] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.182 [2024-10-16 07:12:04.483369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.182 [2024-10-16 07:12:04.483383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.182 [2024-10-16 07:12:04.483389] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.182 [2024-10-16 07:12:04.483539] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.182 [2024-10-16 07:12:04.483690] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.182 [2024-10-16 07:12:04.483696] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.182 [2024-10-16 07:12:04.483701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.182 [2024-10-16 07:12:04.486132] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.182 [2024-10-16 07:12:04.495574] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.182 [2024-10-16 07:12:04.495974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.182 [2024-10-16 07:12:04.496005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.182 [2024-10-16 07:12:04.496014] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.182 [2024-10-16 07:12:04.496182] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.182 [2024-10-16 07:12:04.496336] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.182 [2024-10-16 07:12:04.496342] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.182 [2024-10-16 07:12:04.496350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.182 [2024-10-16 07:12:04.498787] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.182 [2024-10-16 07:12:04.508237] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.182 [2024-10-16 07:12:04.508714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.182 [2024-10-16 07:12:04.508728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.182 [2024-10-16 07:12:04.508734] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.182 [2024-10-16 07:12:04.508890] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.182 [2024-10-16 07:12:04.509041] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.182 [2024-10-16 07:12:04.509047] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.182 [2024-10-16 07:12:04.509052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.182 [2024-10-16 07:12:04.511477] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.182 [2024-10-16 07:12:04.520924] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.182 [2024-10-16 07:12:04.521462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.182 [2024-10-16 07:12:04.521492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.182 [2024-10-16 07:12:04.521501] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.182 [2024-10-16 07:12:04.521667] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.182 [2024-10-16 07:12:04.521821] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.182 [2024-10-16 07:12:04.521828] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.182 [2024-10-16 07:12:04.521834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.182 [2024-10-16 07:12:04.524269] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.182 [2024-10-16 07:12:04.533587] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.182 [2024-10-16 07:12:04.534051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.182 [2024-10-16 07:12:04.534066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.182 [2024-10-16 07:12:04.534072] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.182 [2024-10-16 07:12:04.534223] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.182 [2024-10-16 07:12:04.534374] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.182 [2024-10-16 07:12:04.534379] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.182 [2024-10-16 07:12:04.534384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.182 [2024-10-16 07:12:04.536848] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.182 [2024-10-16 07:12:04.546292] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.182 [2024-10-16 07:12:04.546743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.182 [2024-10-16 07:12:04.546759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.182 [2024-10-16 07:12:04.546765] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.182 [2024-10-16 07:12:04.546921] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.182 [2024-10-16 07:12:04.547073] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.182 [2024-10-16 07:12:04.547078] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.182 [2024-10-16 07:12:04.547083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.182 [2024-10-16 07:12:04.549510] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.182 [2024-10-16 07:12:04.558952] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.182 [2024-10-16 07:12:04.559514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.182 [2024-10-16 07:12:04.559545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.182 [2024-10-16 07:12:04.559554] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.182 [2024-10-16 07:12:04.559721] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.182 [2024-10-16 07:12:04.559880] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.182 [2024-10-16 07:12:04.559887] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.182 [2024-10-16 07:12:04.559892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.182 [2024-10-16 07:12:04.562323] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.182 [2024-10-16 07:12:04.571641] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.182 [2024-10-16 07:12:04.572115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.182 [2024-10-16 07:12:04.572145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.182 [2024-10-16 07:12:04.572154] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.182 [2024-10-16 07:12:04.572323] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.182 [2024-10-16 07:12:04.572476] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.182 [2024-10-16 07:12:04.572483] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.182 [2024-10-16 07:12:04.572488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.182 [2024-10-16 07:12:04.574932] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.182 [2024-10-16 07:12:04.584301] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.182 [2024-10-16 07:12:04.584839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.182 [2024-10-16 07:12:04.584876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.182 [2024-10-16 07:12:04.584884] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.182 [2024-10-16 07:12:04.585050] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.182 [2024-10-16 07:12:04.585207] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.182 [2024-10-16 07:12:04.585213] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.182 [2024-10-16 07:12:04.585219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.182 [2024-10-16 07:12:04.587652] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.182 [2024-10-16 07:12:04.596961] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.182 [2024-10-16 07:12:04.597437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.182 [2024-10-16 07:12:04.597451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.182 [2024-10-16 07:12:04.597457] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.183 [2024-10-16 07:12:04.597608] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.183 [2024-10-16 07:12:04.597759] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.183 [2024-10-16 07:12:04.597764] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.183 [2024-10-16 07:12:04.597769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.183 [2024-10-16 07:12:04.600205] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.183 [2024-10-16 07:12:04.609649] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.183 [2024-10-16 07:12:04.610269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.183 [2024-10-16 07:12:04.610299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.183 [2024-10-16 07:12:04.610308] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.183 [2024-10-16 07:12:04.610474] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.183 [2024-10-16 07:12:04.610627] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.183 [2024-10-16 07:12:04.610633] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.183 [2024-10-16 07:12:04.610639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.183 [2024-10-16 07:12:04.613079] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.183 [2024-10-16 07:12:04.622388] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.183 [2024-10-16 07:12:04.622886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.183 [2024-10-16 07:12:04.622903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.183 [2024-10-16 07:12:04.622909] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.183 [2024-10-16 07:12:04.623061] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.183 [2024-10-16 07:12:04.623212] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.183 [2024-10-16 07:12:04.623217] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.183 [2024-10-16 07:12:04.623223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.183 [2024-10-16 07:12:04.625656] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.183 [2024-10-16 07:12:04.635116] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.183 [2024-10-16 07:12:04.635707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.183 [2024-10-16 07:12:04.635738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.183 [2024-10-16 07:12:04.635746] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.183 [2024-10-16 07:12:04.635920] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.183 [2024-10-16 07:12:04.636074] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.183 [2024-10-16 07:12:04.636080] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.183 [2024-10-16 07:12:04.636085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.183 [2024-10-16 07:12:04.638521] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.183 [2024-10-16 07:12:04.647826] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.183 [2024-10-16 07:12:04.648442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.183 [2024-10-16 07:12:04.648472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.183 [2024-10-16 07:12:04.648481] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.183 [2024-10-16 07:12:04.648648] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.183 [2024-10-16 07:12:04.648801] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.183 [2024-10-16 07:12:04.648808] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.183 [2024-10-16 07:12:04.648813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.183 [2024-10-16 07:12:04.651250] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.183 [2024-10-16 07:12:04.660554] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.183 [2024-10-16 07:12:04.661130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.183 [2024-10-16 07:12:04.661161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.183 [2024-10-16 07:12:04.661170] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.183 [2024-10-16 07:12:04.661336] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.183 [2024-10-16 07:12:04.661490] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.183 [2024-10-16 07:12:04.661496] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.183 [2024-10-16 07:12:04.661502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.183 [2024-10-16 07:12:04.663949] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.183 [2024-10-16 07:12:04.673255] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.183 [2024-10-16 07:12:04.673740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.183 [2024-10-16 07:12:04.673755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.183 [2024-10-16 07:12:04.673764] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.183 [2024-10-16 07:12:04.673920] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.183 [2024-10-16 07:12:04.674071] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.183 [2024-10-16 07:12:04.674077] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.183 [2024-10-16 07:12:04.674082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.183 [2024-10-16 07:12:04.676512] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.446 [2024-10-16 07:12:04.685962] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.446 [2024-10-16 07:12:04.686529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.446 [2024-10-16 07:12:04.686559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.446 [2024-10-16 07:12:04.686568] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.446 [2024-10-16 07:12:04.686737] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.446 [2024-10-16 07:12:04.686897] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.446 [2024-10-16 07:12:04.686904] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.446 [2024-10-16 07:12:04.686909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.446 [2024-10-16 07:12:04.689340] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.446 [2024-10-16 07:12:04.698637] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.446 [2024-10-16 07:12:04.698994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.446 [2024-10-16 07:12:04.699009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.446 [2024-10-16 07:12:04.699015] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.446 [2024-10-16 07:12:04.699166] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.446 [2024-10-16 07:12:04.699316] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.446 [2024-10-16 07:12:04.699322] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.446 [2024-10-16 07:12:04.699327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.446 [2024-10-16 07:12:04.701755] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.446 [2024-10-16 07:12:04.711347] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.446 [2024-10-16 07:12:04.711805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.446 [2024-10-16 07:12:04.711816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.446 [2024-10-16 07:12:04.711822] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.446 [2024-10-16 07:12:04.711977] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.446 [2024-10-16 07:12:04.712128] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.446 [2024-10-16 07:12:04.712137] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.446 [2024-10-16 07:12:04.712142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.446 [2024-10-16 07:12:04.714567] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.446 [2024-10-16 07:12:04.724007] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.446 [2024-10-16 07:12:04.724541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.446 [2024-10-16 07:12:04.724571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.447 [2024-10-16 07:12:04.724580] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.447 [2024-10-16 07:12:04.724746] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.447 [2024-10-16 07:12:04.724906] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.447 [2024-10-16 07:12:04.724913] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.447 [2024-10-16 07:12:04.724919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.447 [2024-10-16 07:12:04.727349] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.447 [2024-10-16 07:12:04.736673] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.447 [2024-10-16 07:12:04.737235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.447 [2024-10-16 07:12:04.737266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.447 [2024-10-16 07:12:04.737275] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.447 [2024-10-16 07:12:04.737442] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.447 [2024-10-16 07:12:04.737595] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.447 [2024-10-16 07:12:04.737601] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.447 [2024-10-16 07:12:04.737606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.447 [2024-10-16 07:12:04.740047] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.447 [2024-10-16 07:12:04.749358] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.447 [2024-10-16 07:12:04.749957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.447 [2024-10-16 07:12:04.749988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.447 [2024-10-16 07:12:04.749997] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.447 [2024-10-16 07:12:04.750166] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.447 [2024-10-16 07:12:04.750320] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.447 [2024-10-16 07:12:04.750326] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.447 [2024-10-16 07:12:04.750332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.447 [2024-10-16 07:12:04.752773] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.447 [2024-10-16 07:12:04.762078] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.447 [2024-10-16 07:12:04.762636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.447 [2024-10-16 07:12:04.762666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.447 [2024-10-16 07:12:04.762675] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.447 [2024-10-16 07:12:04.762842] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.447 [2024-10-16 07:12:04.763009] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.447 [2024-10-16 07:12:04.763015] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.447 [2024-10-16 07:12:04.763020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.447 [2024-10-16 07:12:04.765452] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.447 [2024-10-16 07:12:04.774758] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.447 [2024-10-16 07:12:04.775327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.447 [2024-10-16 07:12:04.775358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.447 [2024-10-16 07:12:04.775367] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.447 [2024-10-16 07:12:04.775533] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.447 [2024-10-16 07:12:04.775686] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.447 [2024-10-16 07:12:04.775693] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.447 [2024-10-16 07:12:04.775698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.447 [2024-10-16 07:12:04.778136] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.447 [2024-10-16 07:12:04.787438] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.447 [2024-10-16 07:12:04.787954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.447 [2024-10-16 07:12:04.787985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.447 [2024-10-16 07:12:04.787994] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.447 [2024-10-16 07:12:04.788162] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.447 [2024-10-16 07:12:04.788316] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.447 [2024-10-16 07:12:04.788322] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.447 [2024-10-16 07:12:04.788328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.447 [2024-10-16 07:12:04.790767] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.447 [2024-10-16 07:12:04.800080] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.447 [2024-10-16 07:12:04.800545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.447 [2024-10-16 07:12:04.800560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.447 [2024-10-16 07:12:04.800565] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.447 [2024-10-16 07:12:04.800720] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.447 [2024-10-16 07:12:04.800875] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.447 [2024-10-16 07:12:04.800882] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.447 [2024-10-16 07:12:04.800888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.447 [2024-10-16 07:12:04.803316] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.447 [2024-10-16 07:12:04.812756] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.447 [2024-10-16 07:12:04.813169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.447 [2024-10-16 07:12:04.813182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.447 [2024-10-16 07:12:04.813187] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.447 [2024-10-16 07:12:04.813338] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.447 [2024-10-16 07:12:04.813488] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.447 [2024-10-16 07:12:04.813493] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.447 [2024-10-16 07:12:04.813499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.447 [2024-10-16 07:12:04.815928] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.447 5877.20 IOPS, 22.96 MiB/s [2024-10-16T05:12:04.946Z] [2024-10-16 07:12:04.826504] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.447 [2024-10-16 07:12:04.826919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.447 [2024-10-16 07:12:04.826933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.447 [2024-10-16 07:12:04.826938] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.447 [2024-10-16 07:12:04.827088] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.447 [2024-10-16 07:12:04.827239] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.447 [2024-10-16 07:12:04.827244] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.447 [2024-10-16 07:12:04.827249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.447 [2024-10-16 07:12:04.829675] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.447 [2024-10-16 07:12:04.839154] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.447 [2024-10-16 07:12:04.839611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.447 [2024-10-16 07:12:04.839624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.447 [2024-10-16 07:12:04.839630] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.447 [2024-10-16 07:12:04.839780] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.447 [2024-10-16 07:12:04.839937] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.447 [2024-10-16 07:12:04.839947] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.447 [2024-10-16 07:12:04.839952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.447 [2024-10-16 07:12:04.842379] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.447 [2024-10-16 07:12:04.851822] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.448 [2024-10-16 07:12:04.852275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.448 [2024-10-16 07:12:04.852287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.448 [2024-10-16 07:12:04.852292] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.448 [2024-10-16 07:12:04.852442] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.448 [2024-10-16 07:12:04.852592] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.448 [2024-10-16 07:12:04.852598] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.448 [2024-10-16 07:12:04.852603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.448 [2024-10-16 07:12:04.855032] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.448 [2024-10-16 07:12:04.864482] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.448 [2024-10-16 07:12:04.864971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.448 [2024-10-16 07:12:04.865002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.448 [2024-10-16 07:12:04.865011] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.448 [2024-10-16 07:12:04.865179] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.448 [2024-10-16 07:12:04.865333] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.448 [2024-10-16 07:12:04.865339] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.448 [2024-10-16 07:12:04.865344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.448 [2024-10-16 07:12:04.867788] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.448 [2024-10-16 07:12:04.877097] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.448 [2024-10-16 07:12:04.877584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.448 [2024-10-16 07:12:04.877598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.448 [2024-10-16 07:12:04.877605] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.448 [2024-10-16 07:12:04.877756] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.448 [2024-10-16 07:12:04.877912] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.448 [2024-10-16 07:12:04.877919] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.448 [2024-10-16 07:12:04.877925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.448 [2024-10-16 07:12:04.880351] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.448 [2024-10-16 07:12:04.889787] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.448 [2024-10-16 07:12:04.890367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.448 [2024-10-16 07:12:04.890397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.448 [2024-10-16 07:12:04.890406] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.448 [2024-10-16 07:12:04.890575] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.448 [2024-10-16 07:12:04.890729] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.448 [2024-10-16 07:12:04.890735] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.448 [2024-10-16 07:12:04.890740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.448 [2024-10-16 07:12:04.893175] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.448 [2024-10-16 07:12:04.902473] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.448 [2024-10-16 07:12:04.902952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.448 [2024-10-16 07:12:04.902982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.448 [2024-10-16 07:12:04.902991] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.448 [2024-10-16 07:12:04.903160] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.448 [2024-10-16 07:12:04.903313] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.448 [2024-10-16 07:12:04.903319] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.448 [2024-10-16 07:12:04.903325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.448 [2024-10-16 07:12:04.905763] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.448 [2024-10-16 07:12:04.915210] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.448 [2024-10-16 07:12:04.915741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.448 [2024-10-16 07:12:04.915771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.448 [2024-10-16 07:12:04.915781] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.448 [2024-10-16 07:12:04.915953] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.448 [2024-10-16 07:12:04.916107] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.448 [2024-10-16 07:12:04.916113] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.448 [2024-10-16 07:12:04.916118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.448 [2024-10-16 07:12:04.918549] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.448 [2024-10-16 07:12:04.927854] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.448 [2024-10-16 07:12:04.928452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.448 [2024-10-16 07:12:04.928482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.448 [2024-10-16 07:12:04.928491] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.448 [2024-10-16 07:12:04.928662] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.448 [2024-10-16 07:12:04.928816] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.448 [2024-10-16 07:12:04.928822] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.448 [2024-10-16 07:12:04.928827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.448 [2024-10-16 07:12:04.931277] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.448 [2024-10-16 07:12:04.940583] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.448 [2024-10-16 07:12:04.940950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.448 [2024-10-16 07:12:04.940966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.448 [2024-10-16 07:12:04.940972] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.448 [2024-10-16 07:12:04.941124] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.448 [2024-10-16 07:12:04.941275] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.448 [2024-10-16 07:12:04.941281] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.448 [2024-10-16 07:12:04.941286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.448 [2024-10-16 07:12:04.943713] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.711 [2024-10-16 07:12:04.953303] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.711 [2024-10-16 07:12:04.953758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.711 [2024-10-16 07:12:04.953770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.711 [2024-10-16 07:12:04.953776] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.711 [2024-10-16 07:12:04.953930] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.711 [2024-10-16 07:12:04.954081] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.711 [2024-10-16 07:12:04.954087] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.711 [2024-10-16 07:12:04.954092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.711 [2024-10-16 07:12:04.956518] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.711 [2024-10-16 07:12:04.965966] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.711 [2024-10-16 07:12:04.966438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.711 [2024-10-16 07:12:04.966449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.711 [2024-10-16 07:12:04.966455] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.711 [2024-10-16 07:12:04.966605] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.711 [2024-10-16 07:12:04.966755] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.711 [2024-10-16 07:12:04.966762] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.711 [2024-10-16 07:12:04.966773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.711 [2024-10-16 07:12:04.969203] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.711 [2024-10-16 07:12:04.978648] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.711 [2024-10-16 07:12:04.979096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.711 [2024-10-16 07:12:04.979107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.711 [2024-10-16 07:12:04.979113] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.711 [2024-10-16 07:12:04.979263] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.711 [2024-10-16 07:12:04.979413] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.711 [2024-10-16 07:12:04.979419] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.711 [2024-10-16 07:12:04.979424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.711 [2024-10-16 07:12:04.981850] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.711 [2024-10-16 07:12:04.991370] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.711 [2024-10-16 07:12:04.991825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.711 [2024-10-16 07:12:04.991837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.711 [2024-10-16 07:12:04.991842] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.711 [2024-10-16 07:12:04.991997] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.711 [2024-10-16 07:12:04.992147] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.711 [2024-10-16 07:12:04.992153] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.711 [2024-10-16 07:12:04.992158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.712 [2024-10-16 07:12:04.994588] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.712 [2024-10-16 07:12:05.004029] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.712 [2024-10-16 07:12:05.004516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.712 [2024-10-16 07:12:05.004527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.712 [2024-10-16 07:12:05.004533] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.712 [2024-10-16 07:12:05.004683] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.712 [2024-10-16 07:12:05.004833] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.712 [2024-10-16 07:12:05.004839] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.712 [2024-10-16 07:12:05.004848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.712 [2024-10-16 07:12:05.007271] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.712 [2024-10-16 07:12:05.016707] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.712 [2024-10-16 07:12:05.017135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.712 [2024-10-16 07:12:05.017150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.712 [2024-10-16 07:12:05.017155] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.712 [2024-10-16 07:12:05.017305] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.712 [2024-10-16 07:12:05.017456] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.712 [2024-10-16 07:12:05.017461] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.712 [2024-10-16 07:12:05.017466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.712 [2024-10-16 07:12:05.019894] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.712 [2024-10-16 07:12:05.029330] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.712 [2024-10-16 07:12:05.029874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.712 [2024-10-16 07:12:05.029905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.712 [2024-10-16 07:12:05.029914] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.712 [2024-10-16 07:12:05.030082] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.712 [2024-10-16 07:12:05.030235] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.712 [2024-10-16 07:12:05.030241] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.712 [2024-10-16 07:12:05.030247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.712 [2024-10-16 07:12:05.032691] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.712 [2024-10-16 07:12:05.042025] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.712 [2024-10-16 07:12:05.042613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.712 [2024-10-16 07:12:05.042644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.712 [2024-10-16 07:12:05.042653] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.712 [2024-10-16 07:12:05.042820] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.712 [2024-10-16 07:12:05.042980] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.712 [2024-10-16 07:12:05.042988] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.712 [2024-10-16 07:12:05.042994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.712 [2024-10-16 07:12:05.045427] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.712 [2024-10-16 07:12:05.054734] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.712 [2024-10-16 07:12:05.055290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.712 [2024-10-16 07:12:05.055321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.712 [2024-10-16 07:12:05.055330] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.712 [2024-10-16 07:12:05.055497] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.712 [2024-10-16 07:12:05.055654] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.712 [2024-10-16 07:12:05.055661] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.712 [2024-10-16 07:12:05.055667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.712 [2024-10-16 07:12:05.058104] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.712 [2024-10-16 07:12:05.067412] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.712 [2024-10-16 07:12:05.067965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.712 [2024-10-16 07:12:05.067996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.712 [2024-10-16 07:12:05.068005] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.712 [2024-10-16 07:12:05.068173] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.712 [2024-10-16 07:12:05.068326] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.712 [2024-10-16 07:12:05.068332] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.712 [2024-10-16 07:12:05.068338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.712 [2024-10-16 07:12:05.070774] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.712 [2024-10-16 07:12:05.080073] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.712 [2024-10-16 07:12:05.080563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.712 [2024-10-16 07:12:05.080577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.712 [2024-10-16 07:12:05.080583] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.712 [2024-10-16 07:12:05.080734] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.712 [2024-10-16 07:12:05.080890] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.712 [2024-10-16 07:12:05.080897] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.712 [2024-10-16 07:12:05.080901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.712 [2024-10-16 07:12:05.083328] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.712 [2024-10-16 07:12:05.092766] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.712 [2024-10-16 07:12:05.093212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.712 [2024-10-16 07:12:05.093242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.712 [2024-10-16 07:12:05.093251] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.712 [2024-10-16 07:12:05.093417] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.712 [2024-10-16 07:12:05.093571] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.712 [2024-10-16 07:12:05.093577] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.712 [2024-10-16 07:12:05.093582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.712 [2024-10-16 07:12:05.096026] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.712 [2024-10-16 07:12:05.105464] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.712 [2024-10-16 07:12:05.106051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.712 [2024-10-16 07:12:05.106081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.712 [2024-10-16 07:12:05.106090] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.712 [2024-10-16 07:12:05.106256] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.712 [2024-10-16 07:12:05.106409] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.712 [2024-10-16 07:12:05.106415] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.712 [2024-10-16 07:12:05.106421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.712 [2024-10-16 07:12:05.108860] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.712 [2024-10-16 07:12:05.118160] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.712 [2024-10-16 07:12:05.118724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.712 [2024-10-16 07:12:05.118755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.712 [2024-10-16 07:12:05.118764] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.712 [2024-10-16 07:12:05.118937] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.712 [2024-10-16 07:12:05.119091] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.712 [2024-10-16 07:12:05.119097] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.712 [2024-10-16 07:12:05.119102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.713 [2024-10-16 07:12:05.121537] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.713 [2024-10-16 07:12:05.130839] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.713 [2024-10-16 07:12:05.131413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.713 [2024-10-16 07:12:05.131443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.713 [2024-10-16 07:12:05.131452] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.713 [2024-10-16 07:12:05.131618] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.713 [2024-10-16 07:12:05.131780] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.713 [2024-10-16 07:12:05.131787] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.713 [2024-10-16 07:12:05.131792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.713 [2024-10-16 07:12:05.134234] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.713 [2024-10-16 07:12:05.143534] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.713 [2024-10-16 07:12:05.144146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.713 [2024-10-16 07:12:05.144177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.713 [2024-10-16 07:12:05.144189] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.713 [2024-10-16 07:12:05.144355] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.713 [2024-10-16 07:12:05.144508] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.713 [2024-10-16 07:12:05.144515] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.713 [2024-10-16 07:12:05.144520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.713 [2024-10-16 07:12:05.146962] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.713 [2024-10-16 07:12:05.156262] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.713 [2024-10-16 07:12:05.156834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.713 [2024-10-16 07:12:05.156870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.713 [2024-10-16 07:12:05.156880] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.713 [2024-10-16 07:12:05.157048] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.713 [2024-10-16 07:12:05.157201] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.713 [2024-10-16 07:12:05.157207] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.713 [2024-10-16 07:12:05.157213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.713 [2024-10-16 07:12:05.159646] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.713 [2024-10-16 07:12:05.168954] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.713 [2024-10-16 07:12:05.169467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.713 [2024-10-16 07:12:05.169497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.713 [2024-10-16 07:12:05.169506] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.713 [2024-10-16 07:12:05.169672] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.713 [2024-10-16 07:12:05.169826] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.713 [2024-10-16 07:12:05.169832] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.713 [2024-10-16 07:12:05.169838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.713 [2024-10-16 07:12:05.172281] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.713 [2024-10-16 07:12:05.181630] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.713 [2024-10-16 07:12:05.182119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.713 [2024-10-16 07:12:05.182149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.713 [2024-10-16 07:12:05.182158] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.713 [2024-10-16 07:12:05.182324] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.713 [2024-10-16 07:12:05.182477] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.713 [2024-10-16 07:12:05.182486] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.713 [2024-10-16 07:12:05.182492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.713 [2024-10-16 07:12:05.184930] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.713 [2024-10-16 07:12:05.194369] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.713 [2024-10-16 07:12:05.194866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.713 [2024-10-16 07:12:05.194881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.713 [2024-10-16 07:12:05.194887] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.713 [2024-10-16 07:12:05.195038] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.713 [2024-10-16 07:12:05.195188] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.713 [2024-10-16 07:12:05.195194] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.713 [2024-10-16 07:12:05.195199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.713 [2024-10-16 07:12:05.197626] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.713 [2024-10-16 07:12:05.207063] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.713 [2024-10-16 07:12:05.207598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.713 [2024-10-16 07:12:05.207628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.713 [2024-10-16 07:12:05.207637] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.713 [2024-10-16 07:12:05.207804] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.713 [2024-10-16 07:12:05.207965] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.713 [2024-10-16 07:12:05.207972] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.713 [2024-10-16 07:12:05.207978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.976 [2024-10-16 07:12:05.210409] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.976 [2024-10-16 07:12:05.219710] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.976 [2024-10-16 07:12:05.220263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.976 [2024-10-16 07:12:05.220293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.976 [2024-10-16 07:12:05.220303] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.976 [2024-10-16 07:12:05.220469] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.976 [2024-10-16 07:12:05.220622] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.976 [2024-10-16 07:12:05.220628] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.976 [2024-10-16 07:12:05.220634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.976 [2024-10-16 07:12:05.223076] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.976 [2024-10-16 07:12:05.232386] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.976 [2024-10-16 07:12:05.232945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.976 [2024-10-16 07:12:05.232976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.976 [2024-10-16 07:12:05.232984] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.976 [2024-10-16 07:12:05.233153] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.976 [2024-10-16 07:12:05.233307] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.976 [2024-10-16 07:12:05.233313] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.976 [2024-10-16 07:12:05.233319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.976 [2024-10-16 07:12:05.235759] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.976 [2024-10-16 07:12:05.245060] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.976 [2024-10-16 07:12:05.245546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.976 [2024-10-16 07:12:05.245560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.976 [2024-10-16 07:12:05.245566] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.976 [2024-10-16 07:12:05.245717] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.976 [2024-10-16 07:12:05.245874] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.976 [2024-10-16 07:12:05.245880] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.976 [2024-10-16 07:12:05.245885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.976 [2024-10-16 07:12:05.248333] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.976 [2024-10-16 07:12:05.257778] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.976 [2024-10-16 07:12:05.258334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.976 [2024-10-16 07:12:05.258364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.976 [2024-10-16 07:12:05.258373] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.976 [2024-10-16 07:12:05.258539] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.976 [2024-10-16 07:12:05.258692] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.976 [2024-10-16 07:12:05.258699] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.976 [2024-10-16 07:12:05.258704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.976 [2024-10-16 07:12:05.261145] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.976 [2024-10-16 07:12:05.270457] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.976 [2024-10-16 07:12:05.270971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.976 [2024-10-16 07:12:05.271002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.976 [2024-10-16 07:12:05.271011] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.976 [2024-10-16 07:12:05.271182] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.976 [2024-10-16 07:12:05.271336] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.976 [2024-10-16 07:12:05.271342] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.976 [2024-10-16 07:12:05.271348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.976 [2024-10-16 07:12:05.273787] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.977 [2024-10-16 07:12:05.283087] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.977 [2024-10-16 07:12:05.283659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.977 [2024-10-16 07:12:05.283689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.977 [2024-10-16 07:12:05.283698] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.977 [2024-10-16 07:12:05.283872] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.977 [2024-10-16 07:12:05.284026] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.977 [2024-10-16 07:12:05.284033] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.977 [2024-10-16 07:12:05.284038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.977 [2024-10-16 07:12:05.286469] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.977 [2024-10-16 07:12:05.295761] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.977 [2024-10-16 07:12:05.296220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.977 [2024-10-16 07:12:05.296234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.977 [2024-10-16 07:12:05.296240] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.977 [2024-10-16 07:12:05.296392] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.977 [2024-10-16 07:12:05.296542] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.977 [2024-10-16 07:12:05.296548] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.977 [2024-10-16 07:12:05.296553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.977 [2024-10-16 07:12:05.298983] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.977 [2024-10-16 07:12:05.308432] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.977 [2024-10-16 07:12:05.309021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.977 [2024-10-16 07:12:05.309052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.977 [2024-10-16 07:12:05.309061] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.977 [2024-10-16 07:12:05.309227] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.977 [2024-10-16 07:12:05.309380] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.977 [2024-10-16 07:12:05.309387] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.977 [2024-10-16 07:12:05.309396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.977 [2024-10-16 07:12:05.311832] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.977 [2024-10-16 07:12:05.321130] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.977 [2024-10-16 07:12:05.321623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.977 [2024-10-16 07:12:05.321637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.977 [2024-10-16 07:12:05.321643] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.977 [2024-10-16 07:12:05.321793] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.977 [2024-10-16 07:12:05.321951] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.977 [2024-10-16 07:12:05.321957] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.977 [2024-10-16 07:12:05.321962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.977 [2024-10-16 07:12:05.324391] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.977 [2024-10-16 07:12:05.333828] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.977 [2024-10-16 07:12:05.334376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.977 [2024-10-16 07:12:05.334406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.977 [2024-10-16 07:12:05.334415] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.977 [2024-10-16 07:12:05.334583] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.977 [2024-10-16 07:12:05.334736] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.977 [2024-10-16 07:12:05.334742] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.977 [2024-10-16 07:12:05.334748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.977 [2024-10-16 07:12:05.337189] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.977 [2024-10-16 07:12:05.346482] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.977 [2024-10-16 07:12:05.347110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.977 [2024-10-16 07:12:05.347141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.977 [2024-10-16 07:12:05.347150] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.977 [2024-10-16 07:12:05.347316] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.977 [2024-10-16 07:12:05.347469] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.977 [2024-10-16 07:12:05.347476] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.977 [2024-10-16 07:12:05.347481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.977 [2024-10-16 07:12:05.349922] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.977 [2024-10-16 07:12:05.359226] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.977 [2024-10-16 07:12:05.359826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.977 [2024-10-16 07:12:05.359864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.977 [2024-10-16 07:12:05.359874] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.977 [2024-10-16 07:12:05.360043] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.977 [2024-10-16 07:12:05.360196] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.977 [2024-10-16 07:12:05.360202] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.977 [2024-10-16 07:12:05.360207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.977 [2024-10-16 07:12:05.362639] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.977 [2024-10-16 07:12:05.371948] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.977 [2024-10-16 07:12:05.372496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.977 [2024-10-16 07:12:05.372527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.977 [2024-10-16 07:12:05.372536] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.977 [2024-10-16 07:12:05.372702] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.977 [2024-10-16 07:12:05.372864] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.977 [2024-10-16 07:12:05.372871] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.977 [2024-10-16 07:12:05.372876] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.977 [2024-10-16 07:12:05.375307] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.977 [2024-10-16 07:12:05.384604] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.977 [2024-10-16 07:12:05.385165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.977 [2024-10-16 07:12:05.385195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.977 [2024-10-16 07:12:05.385204] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.977 [2024-10-16 07:12:05.385371] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.977 [2024-10-16 07:12:05.385524] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.977 [2024-10-16 07:12:05.385530] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.977 [2024-10-16 07:12:05.385536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.977 [2024-10-16 07:12:05.387977] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.977 [2024-10-16 07:12:05.397274] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.977 [2024-10-16 07:12:05.397800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.977 [2024-10-16 07:12:05.397831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.977 [2024-10-16 07:12:05.397840] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.977 [2024-10-16 07:12:05.398015] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.977 [2024-10-16 07:12:05.398169] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.977 [2024-10-16 07:12:05.398175] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.977 [2024-10-16 07:12:05.398180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.977 [2024-10-16 07:12:05.400614] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.977 [2024-10-16 07:12:05.409922] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.977 [2024-10-16 07:12:05.410536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.977 [2024-10-16 07:12:05.410566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.977 [2024-10-16 07:12:05.410575] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.977 [2024-10-16 07:12:05.410741] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.977 [2024-10-16 07:12:05.410902] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.978 [2024-10-16 07:12:05.410909] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.978 [2024-10-16 07:12:05.410914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.978 [2024-10-16 07:12:05.413345] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.978 [2024-10-16 07:12:05.422646] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.978 [2024-10-16 07:12:05.423202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.978 [2024-10-16 07:12:05.423233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.978 [2024-10-16 07:12:05.423242] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.978 [2024-10-16 07:12:05.423408] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.978 [2024-10-16 07:12:05.423562] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.978 [2024-10-16 07:12:05.423568] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.978 [2024-10-16 07:12:05.423574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.978 [2024-10-16 07:12:05.426014] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.978 [2024-10-16 07:12:05.435329] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.978 [2024-10-16 07:12:05.435850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.978 [2024-10-16 07:12:05.435865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.978 [2024-10-16 07:12:05.435871] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.978 [2024-10-16 07:12:05.436022] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.978 [2024-10-16 07:12:05.436172] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.978 [2024-10-16 07:12:05.436178] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.978 [2024-10-16 07:12:05.436186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.978 [2024-10-16 07:12:05.438616] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.978 [2024-10-16 07:12:05.448060] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.978 [2024-10-16 07:12:05.448642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.978 [2024-10-16 07:12:05.448673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.978 [2024-10-16 07:12:05.448681] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.978 [2024-10-16 07:12:05.448855] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.978 [2024-10-16 07:12:05.449009] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.978 [2024-10-16 07:12:05.449015] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.978 [2024-10-16 07:12:05.449021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.978 [2024-10-16 07:12:05.451451] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.978 [2024-10-16 07:12:05.460776] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.978 [2024-10-16 07:12:05.461294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.978 [2024-10-16 07:12:05.461324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.978 [2024-10-16 07:12:05.461333] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.978 [2024-10-16 07:12:05.461499] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:05.978 [2024-10-16 07:12:05.461652] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.978 [2024-10-16 07:12:05.461658] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.978 [2024-10-16 07:12:05.461663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.978 [2024-10-16 07:12:05.464106] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.978 [2024-10-16 07:12:05.473432] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.978 [2024-10-16 07:12:05.474071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.978 [2024-10-16 07:12:05.474102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:05.978 [2024-10-16 07:12:05.474110] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:05.978 [2024-10-16 07:12:05.474277] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.241 [2024-10-16 07:12:05.474430] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.241 [2024-10-16 07:12:05.474438] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.241 [2024-10-16 07:12:05.474443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.241 [2024-10-16 07:12:05.476882] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.241 [2024-10-16 07:12:05.486039] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.241 [2024-10-16 07:12:05.486604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.241 [2024-10-16 07:12:05.486638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.241 [2024-10-16 07:12:05.486647] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.241 [2024-10-16 07:12:05.486813] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.241 [2024-10-16 07:12:05.486974] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.241 [2024-10-16 07:12:05.486981] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.241 [2024-10-16 07:12:05.486987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.241 [2024-10-16 07:12:05.489418] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.241 [2024-10-16 07:12:05.498742] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.241 [2024-10-16 07:12:05.499334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.241 [2024-10-16 07:12:05.499364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.241 [2024-10-16 07:12:05.499373] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.241 [2024-10-16 07:12:05.499539] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.241 [2024-10-16 07:12:05.499692] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.241 [2024-10-16 07:12:05.499699] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.241 [2024-10-16 07:12:05.499704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.241 [2024-10-16 07:12:05.502146] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.241 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3304411 Killed "${NVMF_APP[@]}" "$@"
00:29:06.241 07:12:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:29:06.241 [2024-10-16 07:12:05.511447] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.241 07:12:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:29:06.241 07:12:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:29:06.241 [2024-10-16 07:12:05.511961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.241 [2024-10-16 07:12:05.511991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.241 [2024-10-16 07:12:05.512000] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.241 [2024-10-16 07:12:05.512169] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.241 07:12:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable
00:29:06.241 [2024-10-16 07:12:05.512322] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.242 [2024-10-16 07:12:05.512328] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.242 [2024-10-16 07:12:05.512335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.242 07:12:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:06.242 [2024-10-16 07:12:05.514770] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.242 07:12:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=3306218
00:29:06.242 07:12:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 3306218
00:29:06.242 07:12:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:29:06.242 07:12:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 3306218 ']'
00:29:06.242 07:12:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:06.242 07:12:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100
00:29:06.242 07:12:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:06.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:06.242 07:12:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable
00:29:06.242 07:12:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:06.242 [2024-10-16 07:12:05.524076] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.242 [2024-10-16 07:12:05.524562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.242 [2024-10-16 07:12:05.524576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.242 [2024-10-16 07:12:05.524583] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.242 [2024-10-16 07:12:05.524734] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.242 [2024-10-16 07:12:05.524890] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.242 [2024-10-16 07:12:05.524896] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.242 [2024-10-16 07:12:05.524902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.242 [2024-10-16 07:12:05.527328] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.242 [2024-10-16 07:12:05.536782] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.242 [2024-10-16 07:12:05.537368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.242 [2024-10-16 07:12:05.537399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.242 [2024-10-16 07:12:05.537408] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.242 [2024-10-16 07:12:05.537574] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.242 [2024-10-16 07:12:05.537728] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.242 [2024-10-16 07:12:05.537734] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.242 [2024-10-16 07:12:05.537740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.242 [2024-10-16 07:12:05.540181] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.242 [2024-10-16 07:12:05.549490] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.242 [2024-10-16 07:12:05.549973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.242 [2024-10-16 07:12:05.549988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.242 [2024-10-16 07:12:05.549995] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.242 [2024-10-16 07:12:05.550146] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.242 [2024-10-16 07:12:05.550301] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.242 [2024-10-16 07:12:05.550307] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.242 [2024-10-16 07:12:05.550312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.242 [2024-10-16 07:12:05.552739] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.242 [2024-10-16 07:12:05.562193] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.242 [2024-10-16 07:12:05.562778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.242 [2024-10-16 07:12:05.562809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.242 [2024-10-16 07:12:05.562819] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.242 [2024-10-16 07:12:05.562994] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.242 [2024-10-16 07:12:05.563149] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.242 [2024-10-16 07:12:05.563155] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.242 [2024-10-16 07:12:05.563161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.242 [2024-10-16 07:12:05.565591] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.242 [2024-10-16 07:12:05.574904] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.242 [2024-10-16 07:12:05.575475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.242 [2024-10-16 07:12:05.575505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.242 [2024-10-16 07:12:05.575514] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.242 [2024-10-16 07:12:05.575681] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.242 [2024-10-16 07:12:05.575835] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.242 [2024-10-16 07:12:05.575841] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.242 [2024-10-16 07:12:05.575853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.242 [2024-10-16 07:12:05.578284] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.242 [2024-10-16 07:12:05.581747] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization...
00:29:06.242 [2024-10-16 07:12:05.581795] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:06.242 [2024-10-16 07:12:05.587583] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.242 [2024-10-16 07:12:05.588049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.242 [2024-10-16 07:12:05.588064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.242 [2024-10-16 07:12:05.588070] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.242 [2024-10-16 07:12:05.588222] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.242 [2024-10-16 07:12:05.588377] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.242 [2024-10-16 07:12:05.588383] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.242 [2024-10-16 07:12:05.588389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.242 [2024-10-16 07:12:05.590815] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.242 [2024-10-16 07:12:05.600256] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.242 [2024-10-16 07:12:05.600748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.242 [2024-10-16 07:12:05.600760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.242 [2024-10-16 07:12:05.600766] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.242 [2024-10-16 07:12:05.600921] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.242 [2024-10-16 07:12:05.601072] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.242 [2024-10-16 07:12:05.601077] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.242 [2024-10-16 07:12:05.601082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.242 [2024-10-16 07:12:05.603509] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.242 [2024-10-16 07:12:05.613012] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.242 [2024-10-16 07:12:05.613579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.243 [2024-10-16 07:12:05.613610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.243 [2024-10-16 07:12:05.613619] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.243 [2024-10-16 07:12:05.613785] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.243 [2024-10-16 07:12:05.613945] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.243 [2024-10-16 07:12:05.613952] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.243 [2024-10-16 07:12:05.613958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.243 [2024-10-16 07:12:05.616392] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.243 [2024-10-16 07:12:05.625689] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.243 [2024-10-16 07:12:05.626200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.243 [2024-10-16 07:12:05.626231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.243 [2024-10-16 07:12:05.626240] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.243 [2024-10-16 07:12:05.626409] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.243 [2024-10-16 07:12:05.626563] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.243 [2024-10-16 07:12:05.626569] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.243 [2024-10-16 07:12:05.626575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.243 [2024-10-16 07:12:05.629022] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.243 [2024-10-16 07:12:05.638332] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.243 [2024-10-16 07:12:05.638929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.243 [2024-10-16 07:12:05.638959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.243 [2024-10-16 07:12:05.638968] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.243 [2024-10-16 07:12:05.639137] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.243 [2024-10-16 07:12:05.639291] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.243 [2024-10-16 07:12:05.639297] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.243 [2024-10-16 07:12:05.639302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.243 [2024-10-16 07:12:05.641744] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.243 [2024-10-16 07:12:05.651047] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.243 [2024-10-16 07:12:05.651623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.243 [2024-10-16 07:12:05.651654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.243 [2024-10-16 07:12:05.651663] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.243 [2024-10-16 07:12:05.651829] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.243 [2024-10-16 07:12:05.651988] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.243 [2024-10-16 07:12:05.651995] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.243 [2024-10-16 07:12:05.652000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.243 [2024-10-16 07:12:05.654435] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.243 [2024-10-16 07:12:05.663736] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.243 [2024-10-16 07:12:05.664363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.243 [2024-10-16 07:12:05.664393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.243 [2024-10-16 07:12:05.664402] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.243 [2024-10-16 07:12:05.664568] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.243 [2024-10-16 07:12:05.664722] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.243 [2024-10-16 07:12:05.664728] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.243 [2024-10-16 07:12:05.664733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.243 [2024-10-16 07:12:05.666123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:29:06.243 [2024-10-16 07:12:05.667181] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.243 [2024-10-16 07:12:05.676348] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.243 [2024-10-16 07:12:05.676948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.243 [2024-10-16 07:12:05.676983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.243 [2024-10-16 07:12:05.676993] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.243 [2024-10-16 07:12:05.677162] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.243 [2024-10-16 07:12:05.677316] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.243 [2024-10-16 07:12:05.677322] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.243 [2024-10-16 07:12:05.677328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.243 [2024-10-16 07:12:05.679769] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.243 [2024-10-16 07:12:05.689073] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.243 [2024-10-16 07:12:05.689564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.243 [2024-10-16 07:12:05.689578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.243 [2024-10-16 07:12:05.689584] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.243 [2024-10-16 07:12:05.689735] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.243 [2024-10-16 07:12:05.689890] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.243 [2024-10-16 07:12:05.689896] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.243 [2024-10-16 07:12:05.689902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.243 [2024-10-16 07:12:05.692330] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.243 [2024-10-16 07:12:05.695756] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:06.243 [2024-10-16 07:12:05.695777] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:06.243 [2024-10-16 07:12:05.695784] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:06.243 [2024-10-16 07:12:05.695790] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:06.243 [2024-10-16 07:12:05.695795] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:06.243 [2024-10-16 07:12:05.696876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:29:06.243 [2024-10-16 07:12:05.696964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:29:06.243 [2024-10-16 07:12:05.697157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:06.243 [2024-10-16 07:12:05.701764] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.243 [2024-10-16 07:12:05.702120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.243 [2024-10-16 07:12:05.702134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.243 [2024-10-16 07:12:05.702140] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.243 [2024-10-16 07:12:05.702292] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.243 [2024-10-16 07:12:05.702443] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.243 [2024-10-16 07:12:05.702448] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.243 [2024-10-16 07:12:05.702454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.243 [2024-10-16 07:12:05.704895] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.243 [2024-10-16 07:12:05.714494] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.243 [2024-10-16 07:12:05.714850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.243 [2024-10-16 07:12:05.714864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.243 [2024-10-16 07:12:05.714870] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.243 [2024-10-16 07:12:05.715022] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.243 [2024-10-16 07:12:05.715172] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.243 [2024-10-16 07:12:05.715179] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.243 [2024-10-16 07:12:05.715184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.243 [2024-10-16 07:12:05.717614] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.244 [2024-10-16 07:12:05.727207] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.244 [2024-10-16 07:12:05.727680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.244 [2024-10-16 07:12:05.727693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.244 [2024-10-16 07:12:05.727699] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.244 [2024-10-16 07:12:05.727855] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.244 [2024-10-16 07:12:05.728007] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.244 [2024-10-16 07:12:05.728013] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.244 [2024-10-16 07:12:05.728018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.244 [2024-10-16 07:12:05.730442] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.506 [2024-10-16 07:12:05.739906] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.506 [2024-10-16 07:12:05.740375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.506 [2024-10-16 07:12:05.740387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.506 [2024-10-16 07:12:05.740393] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.506 [2024-10-16 07:12:05.740543] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.506 [2024-10-16 07:12:05.740694] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.506 [2024-10-16 07:12:05.740700] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.506 [2024-10-16 07:12:05.740705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.506 [2024-10-16 07:12:05.743136] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.506 [2024-10-16 07:12:05.752582] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.506 [2024-10-16 07:12:05.753051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.506 [2024-10-16 07:12:05.753067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.506 [2024-10-16 07:12:05.753073] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.506 [2024-10-16 07:12:05.753224] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.506 [2024-10-16 07:12:05.753374] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.506 [2024-10-16 07:12:05.753380] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.506 [2024-10-16 07:12:05.753385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.506 [2024-10-16 07:12:05.755811] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.506 [2024-10-16 07:12:05.765258] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.506 [2024-10-16 07:12:05.765704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.506 [2024-10-16 07:12:05.765715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.506 [2024-10-16 07:12:05.765721] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.506 [2024-10-16 07:12:05.765876] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.506 [2024-10-16 07:12:05.766027] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.506 [2024-10-16 07:12:05.766033] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.506 [2024-10-16 07:12:05.766038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.506 [2024-10-16 07:12:05.768474] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.506 [2024-10-16 07:12:05.777923] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.506 [2024-10-16 07:12:05.778412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.506 [2024-10-16 07:12:05.778424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.506 [2024-10-16 07:12:05.778429] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.506 [2024-10-16 07:12:05.778580] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.506 [2024-10-16 07:12:05.778730] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.507 [2024-10-16 07:12:05.778735] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.507 [2024-10-16 07:12:05.778740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.507 [2024-10-16 07:12:05.781171] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.507 [2024-10-16 07:12:05.790612] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.507 [2024-10-16 07:12:05.791196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.507 [2024-10-16 07:12:05.791229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.507 [2024-10-16 07:12:05.791238] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.507 [2024-10-16 07:12:05.791407] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.507 [2024-10-16 07:12:05.791565] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.507 [2024-10-16 07:12:05.791571] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.507 [2024-10-16 07:12:05.791576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.507 [2024-10-16 07:12:05.794014] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.507 [2024-10-16 07:12:05.803311] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.507 [2024-10-16 07:12:05.803756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.507 [2024-10-16 07:12:05.803787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.507 [2024-10-16 07:12:05.803796] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.507 [2024-10-16 07:12:05.803972] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.507 [2024-10-16 07:12:05.804126] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.507 [2024-10-16 07:12:05.804133] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.507 [2024-10-16 07:12:05.804139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.507 [2024-10-16 07:12:05.806569] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.507 [2024-10-16 07:12:05.816016] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.507 [2024-10-16 07:12:05.816546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.507 [2024-10-16 07:12:05.816576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.507 [2024-10-16 07:12:05.816586] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.507 [2024-10-16 07:12:05.816753] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.507 [2024-10-16 07:12:05.816911] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.507 [2024-10-16 07:12:05.816917] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.507 [2024-10-16 07:12:05.816923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.507 [2024-10-16 07:12:05.819355] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.507 4897.67 IOPS, 19.13 MiB/s [2024-10-16T05:12:06.006Z] [2024-10-16 07:12:05.829370] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.507 [2024-10-16 07:12:05.829937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.507 [2024-10-16 07:12:05.829968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.507 [2024-10-16 07:12:05.829978] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.507 [2024-10-16 07:12:05.830146] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.507 [2024-10-16 07:12:05.830300] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.507 [2024-10-16 07:12:05.830306] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.507 [2024-10-16 07:12:05.830312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.507 [2024-10-16 07:12:05.832753] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.507 [2024-10-16 07:12:05.842062] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.507 [2024-10-16 07:12:05.842613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.507 [2024-10-16 07:12:05.842644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.507 [2024-10-16 07:12:05.842652] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.507 [2024-10-16 07:12:05.842820] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.507 [2024-10-16 07:12:05.842980] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.507 [2024-10-16 07:12:05.842987] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.507 [2024-10-16 07:12:05.842992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.507 [2024-10-16 07:12:05.845423] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.507 [2024-10-16 07:12:05.854721] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.507 [2024-10-16 07:12:05.855168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.507 [2024-10-16 07:12:05.855183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.507 [2024-10-16 07:12:05.855189] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.507 [2024-10-16 07:12:05.855340] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.507 [2024-10-16 07:12:05.855490] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.507 [2024-10-16 07:12:05.855496] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.507 [2024-10-16 07:12:05.855501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.507 [2024-10-16 07:12:05.857930] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.507 [2024-10-16 07:12:05.867372] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.507 [2024-10-16 07:12:05.867708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.507 [2024-10-16 07:12:05.867720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.507 [2024-10-16 07:12:05.867726] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.507 [2024-10-16 07:12:05.867880] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.507 [2024-10-16 07:12:05.868031] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.507 [2024-10-16 07:12:05.868036] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.507 [2024-10-16 07:12:05.868041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.507 [2024-10-16 07:12:05.870464] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.507 [2024-10-16 07:12:05.880092] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.507 [2024-10-16 07:12:05.880626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.507 [2024-10-16 07:12:05.880657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.507 [2024-10-16 07:12:05.880669] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.507 [2024-10-16 07:12:05.880835] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.507 [2024-10-16 07:12:05.880994] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.507 [2024-10-16 07:12:05.881001] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.507 [2024-10-16 07:12:05.881006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.507 [2024-10-16 07:12:05.883437] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.507 [2024-10-16 07:12:05.892735] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.507 [2024-10-16 07:12:05.893293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.507 [2024-10-16 07:12:05.893324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.507 [2024-10-16 07:12:05.893333] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.507 [2024-10-16 07:12:05.893499] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.507 [2024-10-16 07:12:05.893653] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.508 [2024-10-16 07:12:05.893659] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.508 [2024-10-16 07:12:05.893664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.508 [2024-10-16 07:12:05.896102] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.508 [2024-10-16 07:12:05.905397] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.508 [2024-10-16 07:12:05.905737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.508 [2024-10-16 07:12:05.905751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.508 [2024-10-16 07:12:05.905757] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.508 [2024-10-16 07:12:05.905914] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.508 [2024-10-16 07:12:05.906064] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.508 [2024-10-16 07:12:05.906070] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.508 [2024-10-16 07:12:05.906075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.508 [2024-10-16 07:12:05.908500] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.508 [2024-10-16 07:12:05.918082] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.508 [2024-10-16 07:12:05.918513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.508 [2024-10-16 07:12:05.918525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.508 [2024-10-16 07:12:05.918531] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.508 [2024-10-16 07:12:05.918681] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.508 [2024-10-16 07:12:05.918831] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.508 [2024-10-16 07:12:05.918840] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.508 [2024-10-16 07:12:05.918848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.508 [2024-10-16 07:12:05.921276] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.508 [2024-10-16 07:12:05.930713] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.508 [2024-10-16 07:12:05.930964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.508 [2024-10-16 07:12:05.930976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.508 [2024-10-16 07:12:05.930981] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.508 [2024-10-16 07:12:05.931131] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.508 [2024-10-16 07:12:05.931282] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.508 [2024-10-16 07:12:05.931287] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.508 [2024-10-16 07:12:05.931292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.508 [2024-10-16 07:12:05.933715] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.508 [2024-10-16 07:12:05.943448] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.508 [2024-10-16 07:12:05.943977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.508 [2024-10-16 07:12:05.944007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.508 [2024-10-16 07:12:05.944016] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.508 [2024-10-16 07:12:05.944185] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.508 [2024-10-16 07:12:05.944338] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.508 [2024-10-16 07:12:05.944344] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.508 [2024-10-16 07:12:05.944350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.508 [2024-10-16 07:12:05.946787] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.508 [2024-10-16 07:12:05.956093] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.508 [2024-10-16 07:12:05.956630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.508 [2024-10-16 07:12:05.956661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.508 [2024-10-16 07:12:05.956670] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.508 [2024-10-16 07:12:05.956837] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.508 [2024-10-16 07:12:05.956998] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.508 [2024-10-16 07:12:05.957004] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.508 [2024-10-16 07:12:05.957010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.508 [2024-10-16 07:12:05.959441] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.508 [2024-10-16 07:12:05.968761] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.508 [2024-10-16 07:12:05.969329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.508 [2024-10-16 07:12:05.969360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.508 [2024-10-16 07:12:05.969369] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.508 [2024-10-16 07:12:05.969537] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.508 [2024-10-16 07:12:05.969691] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.508 [2024-10-16 07:12:05.969697] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.508 [2024-10-16 07:12:05.969702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.508 [2024-10-16 07:12:05.972141] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.508 [2024-10-16 07:12:05.981441] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.508 [2024-10-16 07:12:05.981869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.508 [2024-10-16 07:12:05.981884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.508 [2024-10-16 07:12:05.981890] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.508 [2024-10-16 07:12:05.982041] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.508 [2024-10-16 07:12:05.982192] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.508 [2024-10-16 07:12:05.982198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.508 [2024-10-16 07:12:05.982203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.508 [2024-10-16 07:12:05.984634] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.508 [2024-10-16 07:12:05.994073] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.508 [2024-10-16 07:12:05.994518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.508 [2024-10-16 07:12:05.994530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.508 [2024-10-16 07:12:05.994536] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.508 [2024-10-16 07:12:05.994686] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.508 [2024-10-16 07:12:05.994836] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.508 [2024-10-16 07:12:05.994842] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.508 [2024-10-16 07:12:05.994852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.508 [2024-10-16 07:12:05.997276] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.772 [2024-10-16 07:12:06.006710] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.772 [2024-10-16 07:12:06.007314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.772 [2024-10-16 07:12:06.007328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.772 [2024-10-16 07:12:06.007334] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.773 [2024-10-16 07:12:06.007489] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.773 [2024-10-16 07:12:06.007639] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.773 [2024-10-16 07:12:06.007645] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.773 [2024-10-16 07:12:06.007649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.773 [2024-10-16 07:12:06.010079] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.773 [2024-10-16 07:12:06.019371] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.773 [2024-10-16 07:12:06.019819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.773 [2024-10-16 07:12:06.019831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.773 [2024-10-16 07:12:06.019836] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.773 [2024-10-16 07:12:06.019990] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.773 [2024-10-16 07:12:06.020141] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.773 [2024-10-16 07:12:06.020146] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.773 [2024-10-16 07:12:06.020151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.773 [2024-10-16 07:12:06.022575] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.773 [2024-10-16 07:12:06.032013] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.773 [2024-10-16 07:12:06.032593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.773 [2024-10-16 07:12:06.032623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.773 [2024-10-16 07:12:06.032632] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.773 [2024-10-16 07:12:06.032799] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.773 [2024-10-16 07:12:06.032959] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.773 [2024-10-16 07:12:06.032966] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.773 [2024-10-16 07:12:06.032971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.773 [2024-10-16 07:12:06.035408] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.773 [2024-10-16 07:12:06.044707] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.773 [2024-10-16 07:12:06.045273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.773 [2024-10-16 07:12:06.045304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.773 [2024-10-16 07:12:06.045313] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.773 [2024-10-16 07:12:06.045479] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.773 [2024-10-16 07:12:06.045633] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.773 [2024-10-16 07:12:06.045639] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.773 [2024-10-16 07:12:06.045648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.773 [2024-10-16 07:12:06.048084] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.773 [2024-10-16 07:12:06.057387] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.773 [2024-10-16 07:12:06.057891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.773 [2024-10-16 07:12:06.057913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.773 [2024-10-16 07:12:06.057920] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.773 [2024-10-16 07:12:06.058076] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.773 [2024-10-16 07:12:06.058227] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.773 [2024-10-16 07:12:06.058234] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.773 [2024-10-16 07:12:06.058240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.773 [2024-10-16 07:12:06.060670] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.773 [2024-10-16 07:12:06.070126] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.773 [2024-10-16 07:12:06.070584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.773 [2024-10-16 07:12:06.070598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.773 [2024-10-16 07:12:06.070603] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.773 [2024-10-16 07:12:06.070754] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.773 [2024-10-16 07:12:06.070909] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.773 [2024-10-16 07:12:06.070916] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.773 [2024-10-16 07:12:06.070921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.773 [2024-10-16 07:12:06.073348] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.773 [2024-10-16 07:12:06.082814] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.773 [2024-10-16 07:12:06.083365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.773 [2024-10-16 07:12:06.083397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.773 [2024-10-16 07:12:06.083405] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.773 [2024-10-16 07:12:06.083572] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.773 [2024-10-16 07:12:06.083725] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.773 [2024-10-16 07:12:06.083731] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.773 [2024-10-16 07:12:06.083737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.773 [2024-10-16 07:12:06.086175] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.773 [2024-10-16 07:12:06.095480] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.773 [2024-10-16 07:12:06.095914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.773 [2024-10-16 07:12:06.095945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.773 [2024-10-16 07:12:06.095954] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.773 [2024-10-16 07:12:06.096123] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.773 [2024-10-16 07:12:06.096277] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.773 [2024-10-16 07:12:06.096283] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.773 [2024-10-16 07:12:06.096288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.773 [2024-10-16 07:12:06.098726] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.773 [2024-10-16 07:12:06.108172] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.773 [2024-10-16 07:12:06.108626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.773 [2024-10-16 07:12:06.108641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.773 [2024-10-16 07:12:06.108646] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.773 [2024-10-16 07:12:06.108798] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.773 [2024-10-16 07:12:06.108953] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.773 [2024-10-16 07:12:06.108960] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.773 [2024-10-16 07:12:06.108965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.773 [2024-10-16 07:12:06.111391] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.773 [2024-10-16 07:12:06.120831] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.773 [2024-10-16 07:12:06.121280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.773 [2024-10-16 07:12:06.121292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.773 [2024-10-16 07:12:06.121298] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.773 [2024-10-16 07:12:06.121448] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.773 [2024-10-16 07:12:06.121598] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.773 [2024-10-16 07:12:06.121604] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.773 [2024-10-16 07:12:06.121609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.773 [2024-10-16 07:12:06.124036] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.773 [2024-10-16 07:12:06.133476] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.773 [2024-10-16 07:12:06.134071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.773 [2024-10-16 07:12:06.134102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.773 [2024-10-16 07:12:06.134111] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.773 [2024-10-16 07:12:06.134278] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.773 [2024-10-16 07:12:06.134435] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.773 [2024-10-16 07:12:06.134441] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.773 [2024-10-16 07:12:06.134447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.773 [2024-10-16 07:12:06.136889] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.773 [2024-10-16 07:12:06.146190] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.774 [2024-10-16 07:12:06.146640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.774 [2024-10-16 07:12:06.146654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.774 [2024-10-16 07:12:06.146660] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.774 [2024-10-16 07:12:06.146811] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.774 [2024-10-16 07:12:06.146966] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.774 [2024-10-16 07:12:06.146972] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.774 [2024-10-16 07:12:06.146977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.774 [2024-10-16 07:12:06.149403] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.774 [2024-10-16 07:12:06.158846] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.774 [2024-10-16 07:12:06.159191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.774 [2024-10-16 07:12:06.159203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.774 [2024-10-16 07:12:06.159209] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.774 [2024-10-16 07:12:06.159360] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.774 [2024-10-16 07:12:06.159510] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.774 [2024-10-16 07:12:06.159518] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.774 [2024-10-16 07:12:06.159524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.774 [2024-10-16 07:12:06.161950] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.774 [2024-10-16 07:12:06.171711] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.774 [2024-10-16 07:12:06.172209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.774 [2024-10-16 07:12:06.172223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.774 [2024-10-16 07:12:06.172228] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.774 [2024-10-16 07:12:06.172379] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.774 [2024-10-16 07:12:06.172530] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.774 [2024-10-16 07:12:06.172535] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.774 [2024-10-16 07:12:06.172540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.774 [2024-10-16 07:12:06.174972] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.774 [2024-10-16 07:12:06.184328] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.774 [2024-10-16 07:12:06.184776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.774 [2024-10-16 07:12:06.184788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.774 [2024-10-16 07:12:06.184795] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.774 [2024-10-16 07:12:06.184949] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.774 [2024-10-16 07:12:06.185100] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.774 [2024-10-16 07:12:06.185106] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.774 [2024-10-16 07:12:06.185111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.774 [2024-10-16 07:12:06.187548] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.774 [2024-10-16 07:12:06.196993] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.774 [2024-10-16 07:12:06.197463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.774 [2024-10-16 07:12:06.197475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.774 [2024-10-16 07:12:06.197480] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.774 [2024-10-16 07:12:06.197630] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.774 [2024-10-16 07:12:06.197781] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.774 [2024-10-16 07:12:06.197787] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.774 [2024-10-16 07:12:06.197791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.774 [2024-10-16 07:12:06.200220] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.774 [2024-10-16 07:12:06.209666] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.774 [2024-10-16 07:12:06.210213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.774 [2024-10-16 07:12:06.210244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.774 [2024-10-16 07:12:06.210253] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.774 [2024-10-16 07:12:06.210420] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.774 [2024-10-16 07:12:06.210574] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.774 [2024-10-16 07:12:06.210580] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.774 [2024-10-16 07:12:06.210585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.774 [2024-10-16 07:12:06.213026] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.774 [2024-10-16 07:12:06.222332] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.774 [2024-10-16 07:12:06.222808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.774 [2024-10-16 07:12:06.222826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.774 [2024-10-16 07:12:06.222832] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.774 [2024-10-16 07:12:06.222988] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.774 [2024-10-16 07:12:06.223139] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.774 [2024-10-16 07:12:06.223145] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.774 [2024-10-16 07:12:06.223150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.774 [2024-10-16 07:12:06.225576] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.774 [2024-10-16 07:12:06.235033] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.774 [2024-10-16 07:12:06.235564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.774 [2024-10-16 07:12:06.235595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.774 [2024-10-16 07:12:06.235604] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.774 [2024-10-16 07:12:06.235771] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.774 [2024-10-16 07:12:06.235930] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.774 [2024-10-16 07:12:06.235937] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.774 [2024-10-16 07:12:06.235943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.774 [2024-10-16 07:12:06.238374] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.774 [2024-10-16 07:12:06.247681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.774 [2024-10-16 07:12:06.248226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.774 [2024-10-16 07:12:06.248257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.774 [2024-10-16 07:12:06.248267] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.774 [2024-10-16 07:12:06.248433] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.774 [2024-10-16 07:12:06.248586] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.774 [2024-10-16 07:12:06.248592] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.774 [2024-10-16 07:12:06.248597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.774 [2024-10-16 07:12:06.251035] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.774 [2024-10-16 07:12:06.260338] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.774 [2024-10-16 07:12:06.260674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.774 [2024-10-16 07:12:06.260689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:06.774 [2024-10-16 07:12:06.260695] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:06.774 [2024-10-16 07:12:06.260850] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:06.774 [2024-10-16 07:12:06.261008] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.774 [2024-10-16 07:12:06.261014] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.774 [2024-10-16 07:12:06.261019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.774 [2024-10-16 07:12:06.263449] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.037 [2024-10-16 07:12:06.273053] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.037 [2024-10-16 07:12:06.273372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.037 [2024-10-16 07:12:06.273386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:07.037 [2024-10-16 07:12:06.273392] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:07.037 [2024-10-16 07:12:06.273543] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:07.037 [2024-10-16 07:12:06.273693] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.037 [2024-10-16 07:12:06.273700] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.037 [2024-10-16 07:12:06.273705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.037 [2024-10-16 07:12:06.276135] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.037 [2024-10-16 07:12:06.285725] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.037 [2024-10-16 07:12:06.286106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.037 [2024-10-16 07:12:06.286118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:07.037 [2024-10-16 07:12:06.286124] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:07.037 [2024-10-16 07:12:06.286275] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:07.037 [2024-10-16 07:12:06.286426] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.037 [2024-10-16 07:12:06.286432] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.037 [2024-10-16 07:12:06.286437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.037 [2024-10-16 07:12:06.288887] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.037 [2024-10-16 07:12:06.298337] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.037 [2024-10-16 07:12:06.298782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.037 [2024-10-16 07:12:06.298794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:07.037 [2024-10-16 07:12:06.298800] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:07.037 [2024-10-16 07:12:06.298954] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:07.037 [2024-10-16 07:12:06.299106] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.037 [2024-10-16 07:12:06.299112] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.037 [2024-10-16 07:12:06.299118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.037 [2024-10-16 07:12:06.301544] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.037 [2024-10-16 07:12:06.310993] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.037 [2024-10-16 07:12:06.311450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.037 [2024-10-16 07:12:06.311462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:07.037 [2024-10-16 07:12:06.311468] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:07.037 [2024-10-16 07:12:06.311618] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:07.037 [2024-10-16 07:12:06.311770] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.037 [2024-10-16 07:12:06.311776] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.038 [2024-10-16 07:12:06.311782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.038 [2024-10-16 07:12:06.314215] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.038 [2024-10-16 07:12:06.323664] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.038 [2024-10-16 07:12:06.324127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.038 [2024-10-16 07:12:06.324140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:07.038 [2024-10-16 07:12:06.324145] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:07.038 [2024-10-16 07:12:06.324296] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:07.038 [2024-10-16 07:12:06.324447] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.038 [2024-10-16 07:12:06.324453] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.038 [2024-10-16 07:12:06.324459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.038 [2024-10-16 07:12:06.326890] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.038 [2024-10-16 07:12:06.336343] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.038 [2024-10-16 07:12:06.336752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.038 [2024-10-16 07:12:06.336764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:07.038 [2024-10-16 07:12:06.336769] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:07.038 [2024-10-16 07:12:06.336924] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:07.038 [2024-10-16 07:12:06.337075] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.038 [2024-10-16 07:12:06.337081] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.038 [2024-10-16 07:12:06.337086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.038 [2024-10-16 07:12:06.339511] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.038 [2024-10-16 07:12:06.348954] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.038 [2024-10-16 07:12:06.349514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.038 [2024-10-16 07:12:06.349545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:07.038 [2024-10-16 07:12:06.349557] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:07.038 [2024-10-16 07:12:06.349723] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:07.038 [2024-10-16 07:12:06.349884] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.038 [2024-10-16 07:12:06.349892] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.038 [2024-10-16 07:12:06.349897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.038 [2024-10-16 07:12:06.352327] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.038 [2024-10-16 07:12:06.361632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.038 [2024-10-16 07:12:06.362226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.038 [2024-10-16 07:12:06.362257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:07.038 [2024-10-16 07:12:06.362266] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:07.038 [2024-10-16 07:12:06.362432] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:07.038 [2024-10-16 07:12:06.362586] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.038 [2024-10-16 07:12:06.362592] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.038 [2024-10-16 07:12:06.362598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.038 [2024-10-16 07:12:06.365036] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.038 07:12:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:29:07.038 07:12:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0
00:29:07.038 07:12:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:29:07.038 07:12:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable
00:29:07.038 07:12:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:07.038 [2024-10-16 07:12:06.374354] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.038 [2024-10-16 07:12:06.374907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.038 [2024-10-16 07:12:06.374938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:07.038 [2024-10-16 07:12:06.374948] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:07.038 [2024-10-16 07:12:06.375116] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:07.038 [2024-10-16 07:12:06.375270] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.038 [2024-10-16 07:12:06.375277] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.038 [2024-10-16 07:12:06.375282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.038 [2024-10-16 07:12:06.377721] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.038 [2024-10-16 07:12:06.387033] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.038 [2024-10-16 07:12:06.387461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.038 [2024-10-16 07:12:06.387476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:07.038 [2024-10-16 07:12:06.387485] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:07.038 [2024-10-16 07:12:06.387637] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:07.038 [2024-10-16 07:12:06.387787] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.038 [2024-10-16 07:12:06.387793] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.038 [2024-10-16 07:12:06.387798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.038 [2024-10-16 07:12:06.390233] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
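
The @860/@864 trace above is the tail of the suite's wait-for-target helper returning success once the nvmf target process answers RPCs, after which timing_exit closes the start_nvmf_tgt timer. A rough sketch of that wait pattern with illustrative names only (the real helper lives in autotest_common.sh; the function name and socket path here are assumptions):

    # Illustrative only: poll the RPC socket until the target responds.
    wait_for_target_rpc() {
        local i=50
        while (( i-- > 0 )); do
            # rpc_get_methods is a standard SPDK RPC; success means the
            # target's RPC server is up and startup can be considered done.
            ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && return 0
            sleep 0.5
        done
        return 1
    }
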
00:29:07.038 [2024-10-16 07:12:06.399682] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.038 [2024-10-16 07:12:06.400152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.038 [2024-10-16 07:12:06.400165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:07.038 [2024-10-16 07:12:06.400171] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:07.038 [2024-10-16 07:12:06.400322] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:07.038 [2024-10-16 07:12:06.400473] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.038 [2024-10-16 07:12:06.400480] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.038 [2024-10-16 07:12:06.400485] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.038 [2024-10-16 07:12:06.402914] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.038 07:12:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:07.038 07:12:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:29:07.038 07:12:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:07.038 07:12:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:07.038 [2024-10-16 07:12:06.412370] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.038 [2024-10-16 07:12:06.412955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.038 [2024-10-16 07:12:06.412987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420
00:29:07.038 [2024-10-16 07:12:06.412995] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set
00:29:07.038 [2024-10-16 07:12:06.413164] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor
00:29:07.038 [2024-10-16 07:12:06.413318] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.038 [2024-10-16 07:12:06.413324] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.038 [2024-10-16 07:12:06.413330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.038 [2024-10-16 07:12:06.415767] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
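
Interleaved with the retry noise, host/bdevperf.sh@17 issues the first target-side configuration RPC: nvmf_create_transport -t tcp -o -u 8192 instantiates the TCP transport, with -u setting the I/O unit size and -o passed through exactly as traced. rpc_cmd wraps SPDK's scripts/rpc.py, so a standalone equivalent would look like the sketch below (the socket path is an assumption, being rpc.py's usual default):

    # Sketch of the traced RPC issued directly against the target's
    # RPC socket; flags copied verbatim from the shell trace above.
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192
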
00:29:07.038 [2024-10-16 07:12:06.416729] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:07.038 07:12:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.038 07:12:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:07.038 07:12:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.038 07:12:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:07.038 [2024-10-16 07:12:06.425077] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.038 [2024-10-16 07:12:06.425506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.038 [2024-10-16 07:12:06.425537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:07.038 [2024-10-16 07:12:06.425546] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:07.039 [2024-10-16 07:12:06.425712] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:07.039 [2024-10-16 07:12:06.425872] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.039 [2024-10-16 07:12:06.425879] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.039 [2024-10-16 07:12:06.425884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.039 [2024-10-16 07:12:06.428318] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.039 [2024-10-16 07:12:06.437771] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.039 [2024-10-16 07:12:06.438339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.039 [2024-10-16 07:12:06.438370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:07.039 [2024-10-16 07:12:06.438380] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:07.039 [2024-10-16 07:12:06.438546] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:07.039 [2024-10-16 07:12:06.438700] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.039 [2024-10-16 07:12:06.438706] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.039 [2024-10-16 07:12:06.438711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.039 [2024-10-16 07:12:06.441149] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.039 [2024-10-16 07:12:06.450455] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.039 Malloc0 00:29:07.039 [2024-10-16 07:12:06.450943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.039 [2024-10-16 07:12:06.450958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:07.039 [2024-10-16 07:12:06.450964] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:07.039 [2024-10-16 07:12:06.451115] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:07.039 [2024-10-16 07:12:06.451265] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.039 [2024-10-16 07:12:06.451271] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.039 [2024-10-16 07:12:06.451276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.039 07:12:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.039 07:12:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:07.039 07:12:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.039 07:12:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:07.039 [2024-10-16 07:12:06.453701] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.039 [2024-10-16 07:12:06.463152] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.039 07:12:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.039 [2024-10-16 07:12:06.463575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.039 [2024-10-16 07:12:06.463605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:07.039 [2024-10-16 07:12:06.463615] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:07.039 [2024-10-16 07:12:06.463782] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:07.039 07:12:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:07.039 [2024-10-16 07:12:06.463943] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.039 [2024-10-16 07:12:06.463950] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.039 [2024-10-16 07:12:06.463956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:29:07.039 07:12:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.039 07:12:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:07.039 [2024-10-16 07:12:06.466388] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.039 07:12:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.039 07:12:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:07.039 [2024-10-16 07:12:06.475854] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.039 07:12:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.039 07:12:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:07.039 [2024-10-16 07:12:06.476455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.039 [2024-10-16 07:12:06.476486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ea0c0 with addr=10.0.0.2, port=4420 00:29:07.039 [2024-10-16 07:12:06.476496] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(6) to be set 00:29:07.039 [2024-10-16 07:12:06.476663] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea0c0 (9): Bad file descriptor 00:29:07.039 [2024-10-16 07:12:06.476816] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.039 [2024-10-16 07:12:06.476824] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.039 [2024-10-16 07:12:06.476830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.039 [2024-10-16 07:12:06.479271] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.039 [2024-10-16 07:12:06.482577] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:07.039 07:12:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.039 07:12:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3304982 00:29:07.039 [2024-10-16 07:12:06.488578] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.039 [2024-10-16 07:12:06.515821] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
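The target-side xtrace lines are interleaved with the host-side reset errors because bdevperf runs in the background and is reaped later by host/bdevperf.sh@38 (wait 3304982); once tcp.c:1081 reports the listener on 10.0.0.2 port 4420, the final reset attempt succeeds and bdevperf moves into the measured I/O phase shown next. For reference, the five RPCs issued by host/bdevperf.sh@17-21 above, written as direct scripts/rpc.py calls (a sketch; rpc_cmd in the harness is a thin wrapper that forwards the same arguments to a running nvmf_tgt):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192                                    # bdevperf.sh@17
    $RPC bdev_malloc_create 64 512 -b Malloc0                                       # bdevperf.sh@18
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001  # bdevperf.sh@19
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                   # bdevperf.sh@20
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420  # bdevperf.sh@21

All arguments mirror the log verbatim; -u 8192 appears to set the transport IO unit size pinned for this TCP run.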
00:29:08.552 4789.00 IOPS, 18.71 MiB/s
[2024-10-16T05:12:08.993Z] 5801.25 IOPS, 22.66 MiB/s
[2024-10-16T05:12:09.937Z] 6568.89 IOPS, 25.66 MiB/s
[2024-10-16T05:12:10.880Z] 7194.00 IOPS, 28.10 MiB/s
[2024-10-16T05:12:12.264Z] 7705.36 IOPS, 30.10 MiB/s
[2024-10-16T05:12:13.205Z] 8136.17 IOPS, 31.78 MiB/s
[2024-10-16T05:12:14.145Z] 8480.54 IOPS, 33.13 MiB/s
[2024-10-16T05:12:15.086Z] 8795.71 IOPS, 34.36 MiB/s
[2024-10-16T05:12:15.086Z] 9073.60 IOPS, 35.44 MiB/s
00:29:15.587 Latency(us)
[2024-10-16T05:12:15.086Z] Device Information : runtime(s)     IOPS    MiB/s    Fail/s   TO/s   Average      min       max
00:29:15.587 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:15.587 Verification LBA range: start 0x0 length 0x4000
00:29:15.587 Nvme1n1            :      15.01  9076.61    35.46  13336.66   0.00   5693.06   552.96  15510.19
00:29:15.587 [2024-10-16T05:12:15.086Z] ===================================================================================================================
00:29:15.587 [2024-10-16T05:12:15.086Z] Total              :             9076.61    35.46  13336.66   0.00   5693.06   552.96  15510.19
00:29:15.587 07:12:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:29:15.587 07:12:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:29:15.587 07:12:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:15.587 07:12:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:15.587 07:12:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:15.587 07:12:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:29:15.587 07:12:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:29:15.587 07:12:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@514 -- # nvmfcleanup
00:29:15.587 07:12:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync
00:29:15.587 07:12:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:15.587 07:12:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e
00:29:15.587 07:12:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:15.587 07:12:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:29:15.587 rmmod nvme_tcp
00:29:15.587 rmmod nvme_fabrics
00:29:15.587 rmmod nvme_keyring
00:29:15.587 07:12:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:29:15.587 07:12:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e
00:29:15.587 07:12:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0
00:29:15.587 07:12:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@515 -- # '[' -n 3306218 ']'
00:29:15.587 07:12:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # killprocess 3306218
00:29:15.587 07:12:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 3306218 ']'
00:29:15.587 07:12:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 3306218
00:29:15.587 07:12:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname
00:29:15.587 07:12:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:29:15.587 07:12:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3306218
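The MiB/s column in the summary follows from the IOPS column at the job's 4096-byte IO size: 9076.61 IOPS * 4096 B / 1048576 B per MiB = 35.46 MiB/s, matching both the Nvme1n1 and Total rows. A one-line check (bc truncates at the given scale, so it prints 35.45; the table rounds):

    echo 'scale=2; 9076.61 * 4096 / 1048576' | bc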
00:29:15.848 07:12:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:15.848 07:12:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:15.848 07:12:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3306218' 00:29:15.848 killing process with pid 3306218 00:29:15.848 07:12:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 3306218 00:29:15.848 07:12:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 3306218 00:29:15.848 07:12:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:15.848 07:12:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:15.848 07:12:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:15.848 07:12:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:29:15.848 07:12:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-save 00:29:15.848 07:12:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:15.848 07:12:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-restore 00:29:15.848 07:12:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:15.848 07:12:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:15.848 07:12:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:15.848 07:12:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:15.848 07:12:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:18.391 07:12:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:18.391 00:29:18.391 real 0m28.207s 00:29:18.391 user 1m3.178s 00:29:18.391 sys 0m7.666s 00:29:18.391 07:12:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:18.391 07:12:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:18.391 ************************************ 00:29:18.391 END TEST nvmf_bdevperf 00:29:18.391 ************************************ 00:29:18.391 07:12:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:18.391 07:12:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:18.391 07:12:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:18.391 07:12:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.391 ************************************ 00:29:18.391 START TEST nvmf_target_disconnect 00:29:18.391 ************************************ 00:29:18.391 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:18.391 * Looking for test storage... 
00:29:18.391 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:18.391 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:18.391 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:29:18.391 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:18.391 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:18.391 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:18.391 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:18.391 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:18.391 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:29:18.391 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:29:18.391 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:29:18.391 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:29:18.391 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:29:18.391 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:29:18.391 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:29:18.391 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:18.391 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:29:18.391 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:29:18.391 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:18.391 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:18.391 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:29:18.391 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:29:18.391 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:18.391 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:29:18.391 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:29:18.391 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:29:18.391 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:29:18.391 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:18.391 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:29:18.391 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:29:18.391 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:18.391 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:18.391 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:29:18.391 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:18.391 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:18.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.391 --rc genhtml_branch_coverage=1 00:29:18.391 --rc genhtml_function_coverage=1 00:29:18.391 --rc genhtml_legend=1 00:29:18.391 --rc geninfo_all_blocks=1 00:29:18.391 --rc geninfo_unexecuted_blocks=1 00:29:18.391 00:29:18.391 ' 00:29:18.391 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:18.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.391 --rc genhtml_branch_coverage=1 00:29:18.391 --rc genhtml_function_coverage=1 00:29:18.391 --rc genhtml_legend=1 00:29:18.391 --rc geninfo_all_blocks=1 00:29:18.391 --rc geninfo_unexecuted_blocks=1 00:29:18.391 00:29:18.391 ' 00:29:18.391 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:18.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.391 --rc genhtml_branch_coverage=1 00:29:18.391 --rc genhtml_function_coverage=1 00:29:18.391 --rc genhtml_legend=1 00:29:18.391 --rc geninfo_all_blocks=1 00:29:18.391 --rc geninfo_unexecuted_blocks=1 00:29:18.391 00:29:18.391 ' 00:29:18.391 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:18.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.391 --rc genhtml_branch_coverage=1 00:29:18.391 --rc genhtml_function_coverage=1 00:29:18.391 --rc genhtml_legend=1 00:29:18.391 --rc geninfo_all_blocks=1 00:29:18.391 --rc geninfo_unexecuted_blocks=1 00:29:18.391 00:29:18.391 ' 00:29:18.391 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:18.391 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:29:18.391 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:18.391 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:18.392 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:18.392 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:18.392 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:18.392 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:18.392 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:18.392 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:18.392 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:18.392 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:18.392 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:18.392 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:18.392 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:18.392 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:18.392 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:18.392 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:18.392 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:18.392 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:29:18.392 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:18.392 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:18.392 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:18.392 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.392 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.392 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.392 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:29:18.392 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.392 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:29:18.392 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:18.392 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:18.392 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:18.392 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:18.392 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:18.392 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:18.392 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:18.392 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:18.392 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:18.392 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:18.392 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:18.392 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:18.392 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:18.392 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:29:18.392 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:18.392 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:18.392 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:18.392 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:18.392 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:18.392 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:18.392 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:18.392 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:18.392 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:18.392 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:18.392 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:29:18.392 07:12:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:26.541 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:26.541 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:29:26.541 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:26.541 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:26.541 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:26.542 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:26.542 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:26.542 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:26.542 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
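The scan above is nvmf/common.sh walking the PCI bus: each function's vendor/device pair is matched against the e810 list (0x8086:0x159b for these two ports), and for tcp the kernel net interfaces under each function's net/ directory are collected (cvl_0_0 and cvl_0_1 here). The same walk in isolation looks roughly like this (a sysfs sketch, not the pci_bus_cache implementation the script uses):

    # Find E810 ports (vendor 0x8086, device 0x159b) and their net interfaces.
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(cat "$pci/vendor") device=$(cat "$pci/device")
        if [[ $vendor == 0x8086 && $device == 0x159b ]]; then
            echo "Found ${pci##*/} ($vendor - $device)"
            for net in "$pci"/net/*; do
                [[ -e $net ]] && echo "  net device under ${pci##*/}: ${net##*/}"
            done
        fi
    done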
00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:26.542 07:12:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:26.542 07:12:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:26.542 07:12:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:26.542 07:12:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:26.542 07:12:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:26.542 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:26.542 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.682 ms 00:29:26.542 00:29:26.542 --- 10.0.0.2 ping statistics --- 00:29:26.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:26.542 rtt min/avg/max/mdev = 0.682/0.682/0.682/0.000 ms 00:29:26.542 07:12:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:26.542 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:26.542 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:29:26.542 00:29:26.542 --- 10.0.0.1 ping statistics --- 00:29:26.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:26.542 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:29:26.542 07:12:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:26.542 07:12:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # return 0 00:29:26.542 07:12:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:26.542 07:12:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:26.542 07:12:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:26.542 07:12:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:26.542 07:12:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:26.542 07:12:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:26.542 07:12:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:26.542 07:12:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:26.543 07:12:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:26.543 07:12:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:26.543 07:12:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:26.543 ************************************ 00:29:26.543 START TEST nvmf_target_disconnect_tc1 00:29:26.543 ************************************ 00:29:26.543 07:12:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:29:26.543 07:12:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:26.543 07:12:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:29:26.543 07:12:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:26.543 07:12:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:26.543 07:12:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:26.543 07:12:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:26.543 07:12:25 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:26.543 07:12:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:26.543 07:12:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:26.543 07:12:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:26.543 07:12:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:29:26.543 07:12:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:26.543 [2024-10-16 07:12:25.281919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.543 [2024-10-16 07:12:25.282017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1589ba0 with addr=10.0.0.2, port=4420 00:29:26.543 [2024-10-16 07:12:25.282054] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:26.543 [2024-10-16 07:12:25.282073] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:26.543 [2024-10-16 07:12:25.282082] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:29:26.543 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:26.543 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:26.543 Initializing NVMe Controllers 00:29:26.543 07:12:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:29:26.543 07:12:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:26.543 07:12:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:26.543 07:12:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:26.543 00:29:26.543 real 0m0.130s 00:29:26.543 user 0m0.049s 00:29:26.543 sys 0m0.080s 00:29:26.543 07:12:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:26.543 07:12:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:26.543 ************************************ 00:29:26.543 END TEST nvmf_target_disconnect_tc1 00:29:26.543 ************************************ 00:29:26.543 07:12:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:26.543 07:12:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:26.543 07:12:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:29:26.543 07:12:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:26.543 ************************************ 00:29:26.543 START TEST nvmf_target_disconnect_tc2 00:29:26.543 ************************************ 00:29:26.543 07:12:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:29:26.543 07:12:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:29:26.543 07:12:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:26.543 07:12:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:26.543 07:12:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:26.543 07:12:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.543 07:12:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=3312725 00:29:26.543 07:12:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 3312725 00:29:26.543 07:12:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:26.543 07:12:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3312725 ']' 00:29:26.543 07:12:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:26.543 07:12:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:26.543 07:12:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:26.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:26.543 07:12:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:26.543 07:12:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.543 [2024-10-16 07:12:25.442865] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:29:26.543 [2024-10-16 07:12:25.442925] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:26.543 [2024-10-16 07:12:25.531446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:26.543 [2024-10-16 07:12:25.584512] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:26.543 [2024-10-16 07:12:25.584562] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:26.543 [2024-10-16 07:12:25.584571] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:26.543 [2024-10-16 07:12:25.584579] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:26.543 [2024-10-16 07:12:25.584585] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:26.543 [2024-10-16 07:12:25.586660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:26.543 [2024-10-16 07:12:25.586819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:26.543 [2024-10-16 07:12:25.586893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:29:26.543 [2024-10-16 07:12:25.586928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:26.805 07:12:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:26.805 07:12:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:29:26.805 07:12:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:26.805 07:12:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:26.805 07:12:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:27.067 07:12:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:27.067 07:12:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:27.067 07:12:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.067 07:12:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:27.067 Malloc0 00:29:27.067 07:12:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.067 07:12:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:27.067 07:12:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.067 07:12:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:27.067 [2024-10-16 07:12:26.362444] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:27.067 07:12:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.067 07:12:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:27.067 07:12:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.067 07:12:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:27.067 07:12:26 
00:29:27.067 07:12:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:27.067 07:12:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:27.067 07:12:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:27.067 07:12:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:27.067 07:12:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:27.067 07:12:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:27.067 07:12:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:27.067 07:12:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:27.067 [2024-10-16 07:12:26.402906] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:27.067 07:12:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:27.067 07:12:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:29:27.067 07:12:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:27.067 07:12:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:27.067 07:12:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:27.067 07:12:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3313001
00:29:27.067 07:12:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2
00:29:27.067 07:12:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:29:28.986 07:12:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3312725
00:29:28.986 07:12:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
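(With the target SIGKILLed two seconds into a ten-second reconnect run, everything below is the host side failing and retrying. For reference, a sketch of the remaining plumbing plus the fault injection itself; the backgrounding and pid variables are illustrative stand-ins for what the harness does:

    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # 32-deep, 4 KiB, 50/50 random read/write for 10 s on cores 0-3, over NVMe/TCP
    ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    reconnectpid=$!
    sleep 2
    kill -9 "$nvmfpid"   # hard-kill the target mid-I/O; this is the disconnect under test
)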
00:29:28.986 Read completed with error (sct=0, sc=8)
00:29:28.986 starting I/O failed
[... all 32 queued I/Os, a mix of reads and writes, complete with error (sct=0, sc=8), each followed by "starting I/O failed" ...]
00:29:28.986 [2024-10-16 07:12:28.441463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:28.986 [2024-10-16 07:12:28.441947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.986 [2024-10-16 07:12:28.441977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420
00:29:28.986 qpair failed and we were unable to recover it.
00:29:28.986 [2024-10-16 07:12:28.442226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.986 [2024-10-16 07:12:28.442240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420
00:29:28.986 qpair failed and we were unable to recover it.
00:29:28.986 [2024-10-16 07:12:28.442490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.986 [2024-10-16 07:12:28.442501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420
00:29:28.986 qpair failed and we were unable to recover it.
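(Decoding the two error codes above: sc=8 in the generic status set (sct=0) is how the driver completes in-flight commands when their queue pair dies, SPDK's SPDK_NVME_SC_ABORTED_SQ_DELETION; and errno 111 on Linux is ECONNREFUSED, expected here since nothing is listening on 10.0.0.2:4420 once the target is SIGKILLed. A quick check:

    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # prints: ECONNREFUSED - Connection refused
)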
00:29:28.986 [2024-10-16 07:12:28.442709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.986 [2024-10-16 07:12:28.442721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420
00:29:28.986 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() errno = 111; sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 07:12:28.442958 through 07:12:28.498275 ...]
00:29:29.263 [2024-10-16 07:12:28.498602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.263 [2024-10-16 07:12:28.498622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420
00:29:29.263 qpair failed and we were unable to recover it.
00:29:29.263 [2024-10-16 07:12:28.498943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.263 [2024-10-16 07:12:28.498964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.263 qpair failed and we were unable to recover it. 00:29:29.263 [2024-10-16 07:12:28.499211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.263 [2024-10-16 07:12:28.499232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.263 qpair failed and we were unable to recover it. 00:29:29.263 [2024-10-16 07:12:28.499558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.263 [2024-10-16 07:12:28.499579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.263 qpair failed and we were unable to recover it. 00:29:29.263 [2024-10-16 07:12:28.499902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.263 [2024-10-16 07:12:28.499923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.263 qpair failed and we were unable to recover it. 00:29:29.263 [2024-10-16 07:12:28.500288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.263 [2024-10-16 07:12:28.500310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.263 qpair failed and we were unable to recover it. 00:29:29.263 [2024-10-16 07:12:28.500636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.263 [2024-10-16 07:12:28.500657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.263 qpair failed and we were unable to recover it. 00:29:29.263 [2024-10-16 07:12:28.500901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.263 [2024-10-16 07:12:28.500923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.263 qpair failed and we were unable to recover it. 00:29:29.263 [2024-10-16 07:12:28.501264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.263 [2024-10-16 07:12:28.501285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.263 qpair failed and we were unable to recover it. 00:29:29.263 [2024-10-16 07:12:28.503537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.263 [2024-10-16 07:12:28.503613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.263 qpair failed and we were unable to recover it. 00:29:29.263 [2024-10-16 07:12:28.504034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.263 [2024-10-16 07:12:28.504068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.263 qpair failed and we were unable to recover it. 
00:29:29.263 [2024-10-16 07:12:28.504419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.263 [2024-10-16 07:12:28.504450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.263 qpair failed and we were unable to recover it. 00:29:29.263 [2024-10-16 07:12:28.504888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.263 [2024-10-16 07:12:28.504920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.263 qpair failed and we were unable to recover it. 00:29:29.263 [2024-10-16 07:12:28.505279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.263 [2024-10-16 07:12:28.505309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.263 qpair failed and we were unable to recover it. 00:29:29.263 [2024-10-16 07:12:28.505641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.263 [2024-10-16 07:12:28.505672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.263 qpair failed and we were unable to recover it. 00:29:29.263 [2024-10-16 07:12:28.506055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.263 [2024-10-16 07:12:28.506086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.263 qpair failed and we were unable to recover it. 00:29:29.263 [2024-10-16 07:12:28.506416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.263 [2024-10-16 07:12:28.506452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.263 qpair failed and we were unable to recover it. 00:29:29.263 [2024-10-16 07:12:28.506838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.263 [2024-10-16 07:12:28.506879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.263 qpair failed and we were unable to recover it. 00:29:29.263 [2024-10-16 07:12:28.507239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.263 [2024-10-16 07:12:28.507270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.263 qpair failed and we were unable to recover it. 00:29:29.263 [2024-10-16 07:12:28.507632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.263 [2024-10-16 07:12:28.507660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.263 qpair failed and we were unable to recover it. 00:29:29.263 [2024-10-16 07:12:28.508001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.263 [2024-10-16 07:12:28.508032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.263 qpair failed and we were unable to recover it. 
00:29:29.263 [2024-10-16 07:12:28.508392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.263 [2024-10-16 07:12:28.508421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.263 qpair failed and we were unable to recover it. 00:29:29.263 [2024-10-16 07:12:28.508764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.263 [2024-10-16 07:12:28.508793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.263 qpair failed and we were unable to recover it. 00:29:29.263 [2024-10-16 07:12:28.509187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.263 [2024-10-16 07:12:28.509225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.263 qpair failed and we were unable to recover it. 00:29:29.263 [2024-10-16 07:12:28.509602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.263 [2024-10-16 07:12:28.509631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.263 qpair failed and we were unable to recover it. 00:29:29.263 [2024-10-16 07:12:28.509875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.263 [2024-10-16 07:12:28.509905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.263 qpair failed and we were unable to recover it. 00:29:29.263 [2024-10-16 07:12:28.510270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.263 [2024-10-16 07:12:28.510299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.263 qpair failed and we were unable to recover it. 00:29:29.263 [2024-10-16 07:12:28.510650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.263 [2024-10-16 07:12:28.510679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.263 qpair failed and we were unable to recover it. 00:29:29.263 [2024-10-16 07:12:28.511049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.263 [2024-10-16 07:12:28.511079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.263 qpair failed and we were unable to recover it. 00:29:29.263 [2024-10-16 07:12:28.511454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.263 [2024-10-16 07:12:28.511483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.263 qpair failed and we were unable to recover it. 00:29:29.263 [2024-10-16 07:12:28.511838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.263 [2024-10-16 07:12:28.511882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.263 qpair failed and we were unable to recover it. 
00:29:29.263 [2024-10-16 07:12:28.512259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.263 [2024-10-16 07:12:28.512288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.263 qpair failed and we were unable to recover it. 00:29:29.263 [2024-10-16 07:12:28.512654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.263 [2024-10-16 07:12:28.512683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.263 qpair failed and we were unable to recover it. 00:29:29.263 [2024-10-16 07:12:28.513025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.263 [2024-10-16 07:12:28.513057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.263 qpair failed and we were unable to recover it. 00:29:29.263 [2024-10-16 07:12:28.513426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.264 [2024-10-16 07:12:28.513455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.264 qpair failed and we were unable to recover it. 00:29:29.264 [2024-10-16 07:12:28.513818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.264 [2024-10-16 07:12:28.513863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.264 qpair failed and we were unable to recover it. 00:29:29.264 [2024-10-16 07:12:28.514231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.264 [2024-10-16 07:12:28.514260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.264 qpair failed and we were unable to recover it. 00:29:29.264 [2024-10-16 07:12:28.514570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.264 [2024-10-16 07:12:28.514607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.264 qpair failed and we were unable to recover it. 00:29:29.264 [2024-10-16 07:12:28.514977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.264 [2024-10-16 07:12:28.515007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.264 qpair failed and we were unable to recover it. 00:29:29.264 [2024-10-16 07:12:28.515365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.264 [2024-10-16 07:12:28.515393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.264 qpair failed and we were unable to recover it. 00:29:29.264 [2024-10-16 07:12:28.515770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.264 [2024-10-16 07:12:28.515798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.264 qpair failed and we were unable to recover it. 
00:29:29.264 [2024-10-16 07:12:28.516120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.264 [2024-10-16 07:12:28.516150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.264 qpair failed and we were unable to recover it. 00:29:29.264 [2024-10-16 07:12:28.516516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.264 [2024-10-16 07:12:28.516546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.264 qpair failed and we were unable to recover it. 00:29:29.264 [2024-10-16 07:12:28.516916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.264 [2024-10-16 07:12:28.516947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.264 qpair failed and we were unable to recover it. 00:29:29.264 [2024-10-16 07:12:28.517329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.264 [2024-10-16 07:12:28.517357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.264 qpair failed and we were unable to recover it. 00:29:29.264 [2024-10-16 07:12:28.517745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.264 [2024-10-16 07:12:28.517774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.264 qpair failed and we were unable to recover it. 00:29:29.264 [2024-10-16 07:12:28.518014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.264 [2024-10-16 07:12:28.518043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.264 qpair failed and we were unable to recover it. 00:29:29.264 [2024-10-16 07:12:28.518433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.264 [2024-10-16 07:12:28.518461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.264 qpair failed and we were unable to recover it. 00:29:29.264 [2024-10-16 07:12:28.518820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.264 [2024-10-16 07:12:28.518861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.264 qpair failed and we were unable to recover it. 00:29:29.264 [2024-10-16 07:12:28.519218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.264 [2024-10-16 07:12:28.519248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.264 qpair failed and we were unable to recover it. 00:29:29.264 [2024-10-16 07:12:28.519630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.264 [2024-10-16 07:12:28.519664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.264 qpair failed and we were unable to recover it. 
00:29:29.264 [2024-10-16 07:12:28.520004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.264 [2024-10-16 07:12:28.520034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.264 qpair failed and we were unable to recover it. 00:29:29.264 [2024-10-16 07:12:28.520400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.264 [2024-10-16 07:12:28.520428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.264 qpair failed and we were unable to recover it. 00:29:29.264 [2024-10-16 07:12:28.520784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.264 [2024-10-16 07:12:28.520812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.264 qpair failed and we were unable to recover it. 00:29:29.264 [2024-10-16 07:12:28.521232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.264 [2024-10-16 07:12:28.521262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.264 qpair failed and we were unable to recover it. 00:29:29.264 [2024-10-16 07:12:28.521623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.264 [2024-10-16 07:12:28.521651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.264 qpair failed and we were unable to recover it. 00:29:29.264 [2024-10-16 07:12:28.522000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.264 [2024-10-16 07:12:28.522031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.264 qpair failed and we were unable to recover it. 00:29:29.264 [2024-10-16 07:12:28.522387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.264 [2024-10-16 07:12:28.522416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.264 qpair failed and we were unable to recover it. 00:29:29.264 [2024-10-16 07:12:28.522779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.264 [2024-10-16 07:12:28.522807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.264 qpair failed and we were unable to recover it. 00:29:29.264 [2024-10-16 07:12:28.523080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.264 [2024-10-16 07:12:28.523110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.264 qpair failed and we were unable to recover it. 00:29:29.264 [2024-10-16 07:12:28.523460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.264 [2024-10-16 07:12:28.523489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.264 qpair failed and we were unable to recover it. 
00:29:29.264 [2024-10-16 07:12:28.523862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.264 [2024-10-16 07:12:28.523891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.264 qpair failed and we were unable to recover it. 00:29:29.264 [2024-10-16 07:12:28.524139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.264 [2024-10-16 07:12:28.524169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.264 qpair failed and we were unable to recover it. 00:29:29.264 [2024-10-16 07:12:28.524524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.264 [2024-10-16 07:12:28.524553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.264 qpair failed and we were unable to recover it. 00:29:29.264 [2024-10-16 07:12:28.524921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.264 [2024-10-16 07:12:28.524951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.264 qpair failed and we were unable to recover it. 00:29:29.264 [2024-10-16 07:12:28.525321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.264 [2024-10-16 07:12:28.525351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.264 qpair failed and we were unable to recover it. 00:29:29.264 [2024-10-16 07:12:28.525678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.264 [2024-10-16 07:12:28.525707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.264 qpair failed and we were unable to recover it. 00:29:29.264 [2024-10-16 07:12:28.526064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.264 [2024-10-16 07:12:28.526095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.264 qpair failed and we were unable to recover it. 00:29:29.264 [2024-10-16 07:12:28.526468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.264 [2024-10-16 07:12:28.526497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.264 qpair failed and we were unable to recover it. 00:29:29.264 [2024-10-16 07:12:28.526866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.264 [2024-10-16 07:12:28.526896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.264 qpair failed and we were unable to recover it. 00:29:29.264 [2024-10-16 07:12:28.527255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.264 [2024-10-16 07:12:28.527284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.264 qpair failed and we were unable to recover it. 
00:29:29.265 [2024-10-16 07:12:28.527662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.265 [2024-10-16 07:12:28.527690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.265 qpair failed and we were unable to recover it. 00:29:29.265 [2024-10-16 07:12:28.528056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.265 [2024-10-16 07:12:28.528085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.265 qpair failed and we were unable to recover it. 00:29:29.265 [2024-10-16 07:12:28.528458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.265 [2024-10-16 07:12:28.528486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.265 qpair failed and we were unable to recover it. 00:29:29.265 [2024-10-16 07:12:28.528810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.265 [2024-10-16 07:12:28.528839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.265 qpair failed and we were unable to recover it. 00:29:29.265 [2024-10-16 07:12:28.529201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.265 [2024-10-16 07:12:28.529231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.265 qpair failed and we were unable to recover it. 00:29:29.265 [2024-10-16 07:12:28.529595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.265 [2024-10-16 07:12:28.529623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.265 qpair failed and we were unable to recover it. 00:29:29.265 [2024-10-16 07:12:28.529995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.265 [2024-10-16 07:12:28.530031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.265 qpair failed and we were unable to recover it. 00:29:29.265 [2024-10-16 07:12:28.530368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.265 [2024-10-16 07:12:28.530398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.265 qpair failed and we were unable to recover it. 00:29:29.265 [2024-10-16 07:12:28.530761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.265 [2024-10-16 07:12:28.530789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.265 qpair failed and we were unable to recover it. 00:29:29.265 [2024-10-16 07:12:28.531239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.265 [2024-10-16 07:12:28.531268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.265 qpair failed and we were unable to recover it. 
00:29:29.265 [2024-10-16 07:12:28.531700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.265 [2024-10-16 07:12:28.531728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.265 qpair failed and we were unable to recover it. 00:29:29.265 [2024-10-16 07:12:28.532100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.265 [2024-10-16 07:12:28.532130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.265 qpair failed and we were unable to recover it. 00:29:29.265 [2024-10-16 07:12:28.532468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.265 [2024-10-16 07:12:28.532497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.265 qpair failed and we were unable to recover it. 00:29:29.265 [2024-10-16 07:12:28.532877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.265 [2024-10-16 07:12:28.532908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.265 qpair failed and we were unable to recover it. 00:29:29.265 [2024-10-16 07:12:28.533168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.265 [2024-10-16 07:12:28.533196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.265 qpair failed and we were unable to recover it. 00:29:29.265 [2024-10-16 07:12:28.533554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.265 [2024-10-16 07:12:28.533583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.265 qpair failed and we were unable to recover it. 00:29:29.265 [2024-10-16 07:12:28.533936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.265 [2024-10-16 07:12:28.533968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.265 qpair failed and we were unable to recover it. 00:29:29.265 [2024-10-16 07:12:28.534321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.265 [2024-10-16 07:12:28.534349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.265 qpair failed and we were unable to recover it. 00:29:29.265 [2024-10-16 07:12:28.534727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.265 [2024-10-16 07:12:28.534755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.265 qpair failed and we were unable to recover it. 00:29:29.265 [2024-10-16 07:12:28.534999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.265 [2024-10-16 07:12:28.535029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.265 qpair failed and we were unable to recover it. 
00:29:29.265 [2024-10-16 07:12:28.535486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.265 [2024-10-16 07:12:28.535515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.265 qpair failed and we were unable to recover it. 00:29:29.265 [2024-10-16 07:12:28.535910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.265 [2024-10-16 07:12:28.535940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.265 qpair failed and we were unable to recover it. 00:29:29.265 [2024-10-16 07:12:28.536301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.265 [2024-10-16 07:12:28.536329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.265 qpair failed and we were unable to recover it. 00:29:29.265 [2024-10-16 07:12:28.536692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.265 [2024-10-16 07:12:28.536721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.265 qpair failed and we were unable to recover it. 00:29:29.265 [2024-10-16 07:12:28.537080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.265 [2024-10-16 07:12:28.537110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.265 qpair failed and we were unable to recover it. 00:29:29.265 [2024-10-16 07:12:28.537467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.265 [2024-10-16 07:12:28.537496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.265 qpair failed and we were unable to recover it. 00:29:29.265 [2024-10-16 07:12:28.537756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.265 [2024-10-16 07:12:28.537788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.265 qpair failed and we were unable to recover it. 00:29:29.265 [2024-10-16 07:12:28.538169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.265 [2024-10-16 07:12:28.538201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.265 qpair failed and we were unable to recover it. 00:29:29.265 [2024-10-16 07:12:28.538492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.265 [2024-10-16 07:12:28.538521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.265 qpair failed and we were unable to recover it. 00:29:29.265 [2024-10-16 07:12:28.538881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.265 [2024-10-16 07:12:28.538912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.265 qpair failed and we were unable to recover it. 
00:29:29.265 [2024-10-16 07:12:28.539303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.265 [2024-10-16 07:12:28.539331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.265 qpair failed and we were unable to recover it. 00:29:29.265 [2024-10-16 07:12:28.539685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.265 [2024-10-16 07:12:28.539714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.265 qpair failed and we were unable to recover it. 00:29:29.265 [2024-10-16 07:12:28.540072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.265 [2024-10-16 07:12:28.540102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.265 qpair failed and we were unable to recover it. 00:29:29.265 [2024-10-16 07:12:28.540440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.265 [2024-10-16 07:12:28.540469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.265 qpair failed and we were unable to recover it. 00:29:29.265 [2024-10-16 07:12:28.540826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.265 [2024-10-16 07:12:28.540868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.265 qpair failed and we were unable to recover it. 00:29:29.265 [2024-10-16 07:12:28.541226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.265 [2024-10-16 07:12:28.541256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.265 qpair failed and we were unable to recover it. 00:29:29.266 [2024-10-16 07:12:28.541626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.266 [2024-10-16 07:12:28.541654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.266 qpair failed and we were unable to recover it. 00:29:29.266 [2024-10-16 07:12:28.541993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.266 [2024-10-16 07:12:28.542023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.266 qpair failed and we were unable to recover it. 00:29:29.266 [2024-10-16 07:12:28.542399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.266 [2024-10-16 07:12:28.542428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.266 qpair failed and we were unable to recover it. 00:29:29.266 [2024-10-16 07:12:28.542753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.266 [2024-10-16 07:12:28.542791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.266 qpair failed and we were unable to recover it. 
00:29:29.266 [2024-10-16 07:12:28.543229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.266 [2024-10-16 07:12:28.543267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.266 qpair failed and we were unable to recover it. 00:29:29.266 [2024-10-16 07:12:28.543634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.266 [2024-10-16 07:12:28.543664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.266 qpair failed and we were unable to recover it. 00:29:29.266 [2024-10-16 07:12:28.543985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.266 [2024-10-16 07:12:28.544016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.266 qpair failed and we were unable to recover it. 00:29:29.266 [2024-10-16 07:12:28.544365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.266 [2024-10-16 07:12:28.544396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.266 qpair failed and we were unable to recover it. 00:29:29.266 [2024-10-16 07:12:28.544743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.266 [2024-10-16 07:12:28.544772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.266 qpair failed and we were unable to recover it. 00:29:29.266 [2024-10-16 07:12:28.545139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.266 [2024-10-16 07:12:28.545170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.266 qpair failed and we were unable to recover it. 00:29:29.266 [2024-10-16 07:12:28.545523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.266 [2024-10-16 07:12:28.545552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.266 qpair failed and we were unable to recover it. 00:29:29.266 [2024-10-16 07:12:28.545911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.266 [2024-10-16 07:12:28.545944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.266 qpair failed and we were unable to recover it. 00:29:29.266 [2024-10-16 07:12:28.546193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.266 [2024-10-16 07:12:28.546222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.266 qpair failed and we were unable to recover it. 00:29:29.266 [2024-10-16 07:12:28.546565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.266 [2024-10-16 07:12:28.546596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.266 qpair failed and we were unable to recover it. 
00:29:29.266 [2024-10-16 07:12:28.546943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.266 [2024-10-16 07:12:28.546974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.266 qpair failed and we were unable to recover it. 00:29:29.266 [2024-10-16 07:12:28.547313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.266 [2024-10-16 07:12:28.547342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.266 qpair failed and we were unable to recover it. 00:29:29.266 [2024-10-16 07:12:28.547673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.266 [2024-10-16 07:12:28.547703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.266 qpair failed and we were unable to recover it. 00:29:29.266 [2024-10-16 07:12:28.548071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.266 [2024-10-16 07:12:28.548103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.266 qpair failed and we were unable to recover it. 00:29:29.266 [2024-10-16 07:12:28.548434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.266 [2024-10-16 07:12:28.548464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.266 qpair failed and we were unable to recover it. 00:29:29.266 [2024-10-16 07:12:28.548753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.266 [2024-10-16 07:12:28.548781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.266 qpair failed and we were unable to recover it. 00:29:29.266 [2024-10-16 07:12:28.549138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.266 [2024-10-16 07:12:28.549167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.266 qpair failed and we were unable to recover it. 00:29:29.266 [2024-10-16 07:12:28.549515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.266 [2024-10-16 07:12:28.549546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.266 qpair failed and we were unable to recover it. 00:29:29.266 [2024-10-16 07:12:28.549903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.266 [2024-10-16 07:12:28.549933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.266 qpair failed and we were unable to recover it. 00:29:29.266 [2024-10-16 07:12:28.550294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.266 [2024-10-16 07:12:28.550323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.266 qpair failed and we were unable to recover it. 
00:29:29.266 [2024-10-16 07:12:28.550653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.266 [2024-10-16 07:12:28.550681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420
00:29:29.266 qpair failed and we were unable to recover it.
00:29:29.266 [... the same three-line error sequence (connect() failed, errno = 111; sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 07:12:28.550 through 07:12:28.630 as the host retries the connection ...]
00:29:29.273 [2024-10-16 07:12:28.630868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.273 [2024-10-16 07:12:28.630898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420
00:29:29.273 qpair failed and we were unable to recover it.
00:29:29.273 [2024-10-16 07:12:28.631236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.273 [2024-10-16 07:12:28.631266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.273 qpair failed and we were unable to recover it. 00:29:29.273 [2024-10-16 07:12:28.631635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.273 [2024-10-16 07:12:28.631664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.273 qpair failed and we were unable to recover it. 00:29:29.273 [2024-10-16 07:12:28.631904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.273 [2024-10-16 07:12:28.631934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.273 qpair failed and we were unable to recover it. 00:29:29.273 [2024-10-16 07:12:28.632332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.273 [2024-10-16 07:12:28.632360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.273 qpair failed and we were unable to recover it. 00:29:29.273 [2024-10-16 07:12:28.632712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.273 [2024-10-16 07:12:28.632741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.273 qpair failed and we were unable to recover it. 00:29:29.273 [2024-10-16 07:12:28.633107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.273 [2024-10-16 07:12:28.633136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.273 qpair failed and we were unable to recover it. 00:29:29.273 [2024-10-16 07:12:28.633472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.273 [2024-10-16 07:12:28.633502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.273 qpair failed and we were unable to recover it. 00:29:29.273 [2024-10-16 07:12:28.633858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.273 [2024-10-16 07:12:28.633888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.273 qpair failed and we were unable to recover it. 00:29:29.273 [2024-10-16 07:12:28.634213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.273 [2024-10-16 07:12:28.634242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.273 qpair failed and we were unable to recover it. 00:29:29.273 [2024-10-16 07:12:28.634589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.273 [2024-10-16 07:12:28.634618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.273 qpair failed and we were unable to recover it. 
00:29:29.273 [2024-10-16 07:12:28.634983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.273 [2024-10-16 07:12:28.635015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.273 qpair failed and we were unable to recover it. 00:29:29.273 [2024-10-16 07:12:28.635269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.273 [2024-10-16 07:12:28.635298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.273 qpair failed and we were unable to recover it. 00:29:29.273 [2024-10-16 07:12:28.635681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.273 [2024-10-16 07:12:28.635709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.273 qpair failed and we were unable to recover it. 00:29:29.273 [2024-10-16 07:12:28.636146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.273 [2024-10-16 07:12:28.636176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.273 qpair failed and we were unable to recover it. 00:29:29.273 [2024-10-16 07:12:28.636528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.273 [2024-10-16 07:12:28.636557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.273 qpair failed and we were unable to recover it. 00:29:29.273 [2024-10-16 07:12:28.636887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.273 [2024-10-16 07:12:28.636918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.273 qpair failed and we were unable to recover it. 00:29:29.273 [2024-10-16 07:12:28.637280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.273 [2024-10-16 07:12:28.637309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.273 qpair failed and we were unable to recover it. 00:29:29.273 [2024-10-16 07:12:28.637648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.273 [2024-10-16 07:12:28.637677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.273 qpair failed and we were unable to recover it. 00:29:29.273 [2024-10-16 07:12:28.638017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.273 [2024-10-16 07:12:28.638048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.273 qpair failed and we were unable to recover it. 00:29:29.273 [2024-10-16 07:12:28.638412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.273 [2024-10-16 07:12:28.638441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.273 qpair failed and we were unable to recover it. 
00:29:29.273 [2024-10-16 07:12:28.638716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.273 [2024-10-16 07:12:28.638745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.273 qpair failed and we were unable to recover it. 00:29:29.273 [2024-10-16 07:12:28.639111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.273 [2024-10-16 07:12:28.639146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.273 qpair failed and we were unable to recover it. 00:29:29.273 [2024-10-16 07:12:28.639487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.273 [2024-10-16 07:12:28.639518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.273 qpair failed and we were unable to recover it. 00:29:29.273 [2024-10-16 07:12:28.639869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.273 [2024-10-16 07:12:28.639900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.273 qpair failed and we were unable to recover it. 00:29:29.273 [2024-10-16 07:12:28.640285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.273 [2024-10-16 07:12:28.640313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.273 qpair failed and we were unable to recover it. 00:29:29.273 [2024-10-16 07:12:28.640665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.273 [2024-10-16 07:12:28.640695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.273 qpair failed and we were unable to recover it. 00:29:29.273 [2024-10-16 07:12:28.641067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.273 [2024-10-16 07:12:28.641097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.273 qpair failed and we were unable to recover it. 00:29:29.273 [2024-10-16 07:12:28.641465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.273 [2024-10-16 07:12:28.641494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.273 qpair failed and we were unable to recover it. 00:29:29.273 [2024-10-16 07:12:28.641866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.273 [2024-10-16 07:12:28.641896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.273 qpair failed and we were unable to recover it. 00:29:29.273 [2024-10-16 07:12:28.642259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.273 [2024-10-16 07:12:28.642290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.273 qpair failed and we were unable to recover it. 
00:29:29.273 [2024-10-16 07:12:28.642661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.273 [2024-10-16 07:12:28.642690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.273 qpair failed and we were unable to recover it. 00:29:29.274 [2024-10-16 07:12:28.643023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.274 [2024-10-16 07:12:28.643053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.274 qpair failed and we were unable to recover it. 00:29:29.274 [2024-10-16 07:12:28.643401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.274 [2024-10-16 07:12:28.643430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.274 qpair failed and we were unable to recover it. 00:29:29.274 [2024-10-16 07:12:28.643741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.274 [2024-10-16 07:12:28.643779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.274 qpair failed and we were unable to recover it. 00:29:29.274 [2024-10-16 07:12:28.644140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.274 [2024-10-16 07:12:28.644170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.274 qpair failed and we were unable to recover it. 00:29:29.274 [2024-10-16 07:12:28.644532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.274 [2024-10-16 07:12:28.644562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.274 qpair failed and we were unable to recover it. 00:29:29.274 [2024-10-16 07:12:28.645007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.274 [2024-10-16 07:12:28.645038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.274 qpair failed and we were unable to recover it. 00:29:29.274 [2024-10-16 07:12:28.645410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.274 [2024-10-16 07:12:28.645440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.274 qpair failed and we were unable to recover it. 00:29:29.274 [2024-10-16 07:12:28.645771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.274 [2024-10-16 07:12:28.645803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.274 qpair failed and we were unable to recover it. 00:29:29.274 [2024-10-16 07:12:28.646196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.274 [2024-10-16 07:12:28.646227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.274 qpair failed and we were unable to recover it. 
00:29:29.274 [2024-10-16 07:12:28.646587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.274 [2024-10-16 07:12:28.646617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.274 qpair failed and we were unable to recover it. 00:29:29.274 [2024-10-16 07:12:28.647000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.274 [2024-10-16 07:12:28.647029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.274 qpair failed and we were unable to recover it. 00:29:29.274 [2024-10-16 07:12:28.647400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.274 [2024-10-16 07:12:28.647429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.274 qpair failed and we were unable to recover it. 00:29:29.274 [2024-10-16 07:12:28.647792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.274 [2024-10-16 07:12:28.647822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.274 qpair failed and we were unable to recover it. 00:29:29.274 [2024-10-16 07:12:28.648258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.274 [2024-10-16 07:12:28.648287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.274 qpair failed and we were unable to recover it. 00:29:29.274 [2024-10-16 07:12:28.648652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.274 [2024-10-16 07:12:28.648681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.274 qpair failed and we were unable to recover it. 00:29:29.274 [2024-10-16 07:12:28.649025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.274 [2024-10-16 07:12:28.649056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.274 qpair failed and we were unable to recover it. 00:29:29.274 [2024-10-16 07:12:28.649490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.274 [2024-10-16 07:12:28.649519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.274 qpair failed and we were unable to recover it. 00:29:29.274 [2024-10-16 07:12:28.649860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.274 [2024-10-16 07:12:28.649897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.274 qpair failed and we were unable to recover it. 00:29:29.274 [2024-10-16 07:12:28.650281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.274 [2024-10-16 07:12:28.650309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.274 qpair failed and we were unable to recover it. 
00:29:29.274 [2024-10-16 07:12:28.650685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.274 [2024-10-16 07:12:28.650713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.274 qpair failed and we were unable to recover it. 00:29:29.274 [2024-10-16 07:12:28.651081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.274 [2024-10-16 07:12:28.651111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.274 qpair failed and we were unable to recover it. 00:29:29.274 [2024-10-16 07:12:28.651453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.274 [2024-10-16 07:12:28.651482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.274 qpair failed and we were unable to recover it. 00:29:29.274 [2024-10-16 07:12:28.651859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.274 [2024-10-16 07:12:28.651889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.274 qpair failed and we were unable to recover it. 00:29:29.274 [2024-10-16 07:12:28.652242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.274 [2024-10-16 07:12:28.652270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.274 qpair failed and we were unable to recover it. 00:29:29.274 [2024-10-16 07:12:28.652644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.274 [2024-10-16 07:12:28.652673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.274 qpair failed and we were unable to recover it. 00:29:29.274 [2024-10-16 07:12:28.653000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.274 [2024-10-16 07:12:28.653031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.274 qpair failed and we were unable to recover it. 00:29:29.274 [2024-10-16 07:12:28.653377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.274 [2024-10-16 07:12:28.653405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.274 qpair failed and we were unable to recover it. 00:29:29.274 [2024-10-16 07:12:28.653765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.274 [2024-10-16 07:12:28.653793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.274 qpair failed and we were unable to recover it. 00:29:29.274 [2024-10-16 07:12:28.654152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.274 [2024-10-16 07:12:28.654182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.274 qpair failed and we were unable to recover it. 
00:29:29.274 [2024-10-16 07:12:28.654543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.274 [2024-10-16 07:12:28.654572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.274 qpair failed and we were unable to recover it. 00:29:29.274 [2024-10-16 07:12:28.654963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.274 [2024-10-16 07:12:28.654993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.274 qpair failed and we were unable to recover it. 00:29:29.274 [2024-10-16 07:12:28.655376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.274 [2024-10-16 07:12:28.655407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.274 qpair failed and we were unable to recover it. 00:29:29.274 [2024-10-16 07:12:28.655742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.274 [2024-10-16 07:12:28.655770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.274 qpair failed and we were unable to recover it. 00:29:29.274 [2024-10-16 07:12:28.656137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.274 [2024-10-16 07:12:28.656167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.274 qpair failed and we were unable to recover it. 00:29:29.274 [2024-10-16 07:12:28.656552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.274 [2024-10-16 07:12:28.656582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.274 qpair failed and we were unable to recover it. 00:29:29.274 [2024-10-16 07:12:28.656949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.274 [2024-10-16 07:12:28.656981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.274 qpair failed and we were unable to recover it. 00:29:29.274 [2024-10-16 07:12:28.657230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.274 [2024-10-16 07:12:28.657261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.274 qpair failed and we were unable to recover it. 00:29:29.274 [2024-10-16 07:12:28.657496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.274 [2024-10-16 07:12:28.657526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.274 qpair failed and we were unable to recover it. 00:29:29.274 [2024-10-16 07:12:28.657893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.274 [2024-10-16 07:12:28.657926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.274 qpair failed and we were unable to recover it. 
00:29:29.275 [2024-10-16 07:12:28.658299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.275 [2024-10-16 07:12:28.658328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.275 qpair failed and we were unable to recover it. 00:29:29.275 [2024-10-16 07:12:28.658691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.275 [2024-10-16 07:12:28.658720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.275 qpair failed and we were unable to recover it. 00:29:29.275 [2024-10-16 07:12:28.659108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.275 [2024-10-16 07:12:28.659138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.275 qpair failed and we were unable to recover it. 00:29:29.275 [2024-10-16 07:12:28.659382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.275 [2024-10-16 07:12:28.659411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.275 qpair failed and we were unable to recover it. 00:29:29.275 [2024-10-16 07:12:28.659753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.275 [2024-10-16 07:12:28.659782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.275 qpair failed and we were unable to recover it. 00:29:29.275 [2024-10-16 07:12:28.660158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.275 [2024-10-16 07:12:28.660188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.275 qpair failed and we were unable to recover it. 00:29:29.275 [2024-10-16 07:12:28.660541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.275 [2024-10-16 07:12:28.660570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.275 qpair failed and we were unable to recover it. 00:29:29.275 [2024-10-16 07:12:28.660923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.275 [2024-10-16 07:12:28.660955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.275 qpair failed and we were unable to recover it. 00:29:29.275 [2024-10-16 07:12:28.661199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.275 [2024-10-16 07:12:28.661230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.275 qpair failed and we were unable to recover it. 00:29:29.275 [2024-10-16 07:12:28.661589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.275 [2024-10-16 07:12:28.661619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.275 qpair failed and we were unable to recover it. 
00:29:29.275 [2024-10-16 07:12:28.661983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.275 [2024-10-16 07:12:28.662015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.275 qpair failed and we were unable to recover it. 00:29:29.275 [2024-10-16 07:12:28.662372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.275 [2024-10-16 07:12:28.662401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.275 qpair failed and we were unable to recover it. 00:29:29.275 [2024-10-16 07:12:28.662758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.275 [2024-10-16 07:12:28.662788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.275 qpair failed and we were unable to recover it. 00:29:29.275 [2024-10-16 07:12:28.663163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.275 [2024-10-16 07:12:28.663194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.275 qpair failed and we were unable to recover it. 00:29:29.275 [2024-10-16 07:12:28.663549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.275 [2024-10-16 07:12:28.663580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.275 qpair failed and we were unable to recover it. 00:29:29.275 [2024-10-16 07:12:28.663976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.275 [2024-10-16 07:12:28.664006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.275 qpair failed and we were unable to recover it. 00:29:29.275 [2024-10-16 07:12:28.664354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.275 [2024-10-16 07:12:28.664383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.275 qpair failed and we were unable to recover it. 00:29:29.275 [2024-10-16 07:12:28.664738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.275 [2024-10-16 07:12:28.664768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.275 qpair failed and we were unable to recover it. 00:29:29.275 [2024-10-16 07:12:28.665123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.275 [2024-10-16 07:12:28.665154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.275 qpair failed and we were unable to recover it. 00:29:29.275 [2024-10-16 07:12:28.665529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.275 [2024-10-16 07:12:28.665559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.275 qpair failed and we were unable to recover it. 
00:29:29.275 [2024-10-16 07:12:28.665989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.275 [2024-10-16 07:12:28.666020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.275 qpair failed and we were unable to recover it. 00:29:29.275 [2024-10-16 07:12:28.666378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.275 [2024-10-16 07:12:28.666406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.275 qpair failed and we were unable to recover it. 00:29:29.275 [2024-10-16 07:12:28.666839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.275 [2024-10-16 07:12:28.666879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.275 qpair failed and we were unable to recover it. 00:29:29.275 [2024-10-16 07:12:28.667148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.275 [2024-10-16 07:12:28.667180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.275 qpair failed and we were unable to recover it. 00:29:29.275 [2024-10-16 07:12:28.667437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.275 [2024-10-16 07:12:28.667466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.275 qpair failed and we were unable to recover it. 00:29:29.275 [2024-10-16 07:12:28.667881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.275 [2024-10-16 07:12:28.667912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.275 qpair failed and we were unable to recover it. 00:29:29.275 [2024-10-16 07:12:28.668288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.275 [2024-10-16 07:12:28.668319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.275 qpair failed and we were unable to recover it. 00:29:29.275 [2024-10-16 07:12:28.668695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.275 [2024-10-16 07:12:28.668723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.275 qpair failed and we were unable to recover it. 00:29:29.275 [2024-10-16 07:12:28.669073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.275 [2024-10-16 07:12:28.669104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.275 qpair failed and we were unable to recover it. 00:29:29.275 [2024-10-16 07:12:28.669462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.275 [2024-10-16 07:12:28.669491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.275 qpair failed and we were unable to recover it. 
00:29:29.275 [2024-10-16 07:12:28.669865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.275 [2024-10-16 07:12:28.669897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.275 qpair failed and we were unable to recover it. 00:29:29.275 [2024-10-16 07:12:28.670243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.275 [2024-10-16 07:12:28.670272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.275 qpair failed and we were unable to recover it. 00:29:29.275 [2024-10-16 07:12:28.670629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.275 [2024-10-16 07:12:28.670659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.275 qpair failed and we were unable to recover it. 00:29:29.275 [2024-10-16 07:12:28.671026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.276 [2024-10-16 07:12:28.671057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.276 qpair failed and we were unable to recover it. 00:29:29.276 [2024-10-16 07:12:28.671462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.276 [2024-10-16 07:12:28.671491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.276 qpair failed and we were unable to recover it. 00:29:29.276 [2024-10-16 07:12:28.671869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.276 [2024-10-16 07:12:28.671901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.276 qpair failed and we were unable to recover it. 00:29:29.276 [2024-10-16 07:12:28.672173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.276 [2024-10-16 07:12:28.672206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.276 qpair failed and we were unable to recover it. 00:29:29.276 [2024-10-16 07:12:28.672610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.276 [2024-10-16 07:12:28.672641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.276 qpair failed and we were unable to recover it. 00:29:29.276 [2024-10-16 07:12:28.672993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.276 [2024-10-16 07:12:28.673024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.276 qpair failed and we were unable to recover it. 00:29:29.276 [2024-10-16 07:12:28.673286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.276 [2024-10-16 07:12:28.673317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.276 qpair failed and we were unable to recover it. 
00:29:29.276 [2024-10-16 07:12:28.673560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.276 [2024-10-16 07:12:28.673593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.276 qpair failed and we were unable to recover it. 00:29:29.276 [2024-10-16 07:12:28.673943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.276 [2024-10-16 07:12:28.673975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.276 qpair failed and we were unable to recover it. 00:29:29.276 [2024-10-16 07:12:28.674339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.276 [2024-10-16 07:12:28.674369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.276 qpair failed and we were unable to recover it. 00:29:29.276 [2024-10-16 07:12:28.674734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.276 [2024-10-16 07:12:28.674764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.276 qpair failed and we were unable to recover it. 00:29:29.276 [2024-10-16 07:12:28.675129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.276 [2024-10-16 07:12:28.675162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.276 qpair failed and we were unable to recover it. 00:29:29.276 [2024-10-16 07:12:28.675510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.276 [2024-10-16 07:12:28.675540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.276 qpair failed and we were unable to recover it. 00:29:29.276 [2024-10-16 07:12:28.675906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.276 [2024-10-16 07:12:28.675943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.276 qpair failed and we were unable to recover it. 00:29:29.276 [2024-10-16 07:12:28.676284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.276 [2024-10-16 07:12:28.676314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.276 qpair failed and we were unable to recover it. 00:29:29.276 [2024-10-16 07:12:28.676664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.276 [2024-10-16 07:12:28.676693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.276 qpair failed and we were unable to recover it. 00:29:29.276 [2024-10-16 07:12:28.677111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.276 [2024-10-16 07:12:28.677142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.276 qpair failed and we were unable to recover it. 
00:29:29.276 [2024-10-16 07:12:28.677489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.276 [2024-10-16 07:12:28.677519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.276 qpair failed and we were unable to recover it. 00:29:29.276 [2024-10-16 07:12:28.677875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.276 [2024-10-16 07:12:28.677906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.276 qpair failed and we were unable to recover it. 00:29:29.276 [2024-10-16 07:12:28.678144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.276 [2024-10-16 07:12:28.678177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.276 qpair failed and we were unable to recover it. 00:29:29.276 [2024-10-16 07:12:28.678542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.276 [2024-10-16 07:12:28.678572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.276 qpair failed and we were unable to recover it. 00:29:29.276 [2024-10-16 07:12:28.678926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.276 [2024-10-16 07:12:28.678957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.276 qpair failed and we were unable to recover it. 00:29:29.276 [2024-10-16 07:12:28.679287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.276 [2024-10-16 07:12:28.679316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.276 qpair failed and we were unable to recover it. 00:29:29.276 [2024-10-16 07:12:28.679669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.276 [2024-10-16 07:12:28.679699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.276 qpair failed and we were unable to recover it. 00:29:29.276 [2024-10-16 07:12:28.679954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.276 [2024-10-16 07:12:28.679986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.276 qpair failed and we were unable to recover it. 00:29:29.276 [2024-10-16 07:12:28.680427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.276 [2024-10-16 07:12:28.680457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.276 qpair failed and we were unable to recover it. 00:29:29.276 [2024-10-16 07:12:28.680827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.276 [2024-10-16 07:12:28.680894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.276 qpair failed and we were unable to recover it. 
00:29:29.276 [2024-10-16 07:12:28.681247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.276 [2024-10-16 07:12:28.681277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.276 qpair failed and we were unable to recover it. 00:29:29.276 [2024-10-16 07:12:28.681645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.276 [2024-10-16 07:12:28.681675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.276 qpair failed and we were unable to recover it. 00:29:29.276 [2024-10-16 07:12:28.682056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.276 [2024-10-16 07:12:28.682088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.276 qpair failed and we were unable to recover it. 00:29:29.276 [2024-10-16 07:12:28.682457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.277 [2024-10-16 07:12:28.682487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.277 qpair failed and we were unable to recover it. 00:29:29.277 [2024-10-16 07:12:28.682837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.277 [2024-10-16 07:12:28.682879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.277 qpair failed and we were unable to recover it. 00:29:29.277 [2024-10-16 07:12:28.683266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.277 [2024-10-16 07:12:28.683296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.277 qpair failed and we were unable to recover it. 00:29:29.277 [2024-10-16 07:12:28.683704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.277 [2024-10-16 07:12:28.683734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.277 qpair failed and we were unable to recover it. 00:29:29.277 [2024-10-16 07:12:28.684105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.277 [2024-10-16 07:12:28.684135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.277 qpair failed and we were unable to recover it. 00:29:29.277 [2024-10-16 07:12:28.684476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.277 [2024-10-16 07:12:28.684506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.277 qpair failed and we were unable to recover it. 00:29:29.277 [2024-10-16 07:12:28.684905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.277 [2024-10-16 07:12:28.684935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.277 qpair failed and we were unable to recover it. 
00:29:29.556 [2024-10-16 07:12:28.761737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.557 [2024-10-16 07:12:28.761766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.557 qpair failed and we were unable to recover it. 00:29:29.557 [2024-10-16 07:12:28.762141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.557 [2024-10-16 07:12:28.762173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.557 qpair failed and we were unable to recover it. 00:29:29.557 [2024-10-16 07:12:28.762506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.557 [2024-10-16 07:12:28.762535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.557 qpair failed and we were unable to recover it. 00:29:29.557 [2024-10-16 07:12:28.762964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.557 [2024-10-16 07:12:28.762994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.557 qpair failed and we were unable to recover it. 00:29:29.557 [2024-10-16 07:12:28.763363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.557 [2024-10-16 07:12:28.763393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.557 qpair failed and we were unable to recover it. 00:29:29.557 [2024-10-16 07:12:28.763761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.557 [2024-10-16 07:12:28.763791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.557 qpair failed and we were unable to recover it. 00:29:29.557 [2024-10-16 07:12:28.764221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.557 [2024-10-16 07:12:28.764253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.557 qpair failed and we were unable to recover it. 00:29:29.557 [2024-10-16 07:12:28.764600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.557 [2024-10-16 07:12:28.764629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.557 qpair failed and we were unable to recover it. 00:29:29.557 [2024-10-16 07:12:28.764978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.557 [2024-10-16 07:12:28.765008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.557 qpair failed and we were unable to recover it. 00:29:29.557 [2024-10-16 07:12:28.765383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.557 [2024-10-16 07:12:28.765414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.557 qpair failed and we were unable to recover it. 
00:29:29.557 [2024-10-16 07:12:28.765765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.557 [2024-10-16 07:12:28.765795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.557 qpair failed and we were unable to recover it. 00:29:29.557 [2024-10-16 07:12:28.766150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.557 [2024-10-16 07:12:28.766181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.557 qpair failed and we were unable to recover it. 00:29:29.557 [2024-10-16 07:12:28.766519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.557 [2024-10-16 07:12:28.766548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.557 qpair failed and we were unable to recover it. 00:29:29.557 [2024-10-16 07:12:28.766926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.557 [2024-10-16 07:12:28.766958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.557 qpair failed and we were unable to recover it. 00:29:29.557 [2024-10-16 07:12:28.767322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.557 [2024-10-16 07:12:28.767351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.557 qpair failed and we were unable to recover it. 00:29:29.557 [2024-10-16 07:12:28.767788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.557 [2024-10-16 07:12:28.767818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.557 qpair failed and we were unable to recover it. 00:29:29.557 [2024-10-16 07:12:28.768174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.557 [2024-10-16 07:12:28.768203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.557 qpair failed and we were unable to recover it. 00:29:29.557 [2024-10-16 07:12:28.768532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.557 [2024-10-16 07:12:28.768561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.557 qpair failed and we were unable to recover it. 00:29:29.557 [2024-10-16 07:12:28.768915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.557 [2024-10-16 07:12:28.768949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.557 qpair failed and we were unable to recover it. 00:29:29.557 [2024-10-16 07:12:28.769347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.557 [2024-10-16 07:12:28.769377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.557 qpair failed and we were unable to recover it. 
00:29:29.557 [2024-10-16 07:12:28.769740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.557 [2024-10-16 07:12:28.769777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.557 qpair failed and we were unable to recover it. 00:29:29.557 [2024-10-16 07:12:28.770113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.557 [2024-10-16 07:12:28.770145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.557 qpair failed and we were unable to recover it. 00:29:29.557 [2024-10-16 07:12:28.770501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.557 [2024-10-16 07:12:28.770532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.557 qpair failed and we were unable to recover it. 00:29:29.557 [2024-10-16 07:12:28.770882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.557 [2024-10-16 07:12:28.770914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.557 qpair failed and we were unable to recover it. 00:29:29.557 [2024-10-16 07:12:28.771283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.557 [2024-10-16 07:12:28.771312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.557 qpair failed and we were unable to recover it. 00:29:29.557 [2024-10-16 07:12:28.771694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.557 [2024-10-16 07:12:28.771723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.557 qpair failed and we were unable to recover it. 00:29:29.557 [2024-10-16 07:12:28.772068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.557 [2024-10-16 07:12:28.772100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.557 qpair failed and we were unable to recover it. 00:29:29.557 [2024-10-16 07:12:28.772434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.557 [2024-10-16 07:12:28.772463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.557 qpair failed and we were unable to recover it. 00:29:29.557 [2024-10-16 07:12:28.772822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.557 [2024-10-16 07:12:28.772864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.557 qpair failed and we were unable to recover it. 00:29:29.557 [2024-10-16 07:12:28.773190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.557 [2024-10-16 07:12:28.773219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.557 qpair failed and we were unable to recover it. 
00:29:29.557 [2024-10-16 07:12:28.773583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.557 [2024-10-16 07:12:28.773613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.557 qpair failed and we were unable to recover it. 00:29:29.557 [2024-10-16 07:12:28.773969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.557 [2024-10-16 07:12:28.774001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.557 qpair failed and we were unable to recover it. 00:29:29.557 [2024-10-16 07:12:28.774265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.557 [2024-10-16 07:12:28.774295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.557 qpair failed and we were unable to recover it. 00:29:29.557 [2024-10-16 07:12:28.774690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.557 [2024-10-16 07:12:28.774721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.557 qpair failed and we were unable to recover it. 00:29:29.558 [2024-10-16 07:12:28.775067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.558 [2024-10-16 07:12:28.775100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.558 qpair failed and we were unable to recover it. 00:29:29.558 [2024-10-16 07:12:28.775492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.558 [2024-10-16 07:12:28.775521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.558 qpair failed and we were unable to recover it. 00:29:29.558 [2024-10-16 07:12:28.775864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.558 [2024-10-16 07:12:28.775894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.558 qpair failed and we were unable to recover it. 00:29:29.558 [2024-10-16 07:12:28.776288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.558 [2024-10-16 07:12:28.776316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.558 qpair failed and we were unable to recover it. 00:29:29.558 [2024-10-16 07:12:28.776722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.558 [2024-10-16 07:12:28.776752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.558 qpair failed and we were unable to recover it. 00:29:29.558 [2024-10-16 07:12:28.777102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.558 [2024-10-16 07:12:28.777132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.558 qpair failed and we were unable to recover it. 
00:29:29.558 [2024-10-16 07:12:28.777486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.558 [2024-10-16 07:12:28.777515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.558 qpair failed and we were unable to recover it. 00:29:29.558 [2024-10-16 07:12:28.777892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.558 [2024-10-16 07:12:28.777923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.558 qpair failed and we were unable to recover it. 00:29:29.558 [2024-10-16 07:12:28.778292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.558 [2024-10-16 07:12:28.778323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.558 qpair failed and we were unable to recover it. 00:29:29.558 [2024-10-16 07:12:28.778694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.558 [2024-10-16 07:12:28.778726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.558 qpair failed and we were unable to recover it. 00:29:29.558 [2024-10-16 07:12:28.779102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.558 [2024-10-16 07:12:28.779134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.558 qpair failed and we were unable to recover it. 00:29:29.558 [2024-10-16 07:12:28.779398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.558 [2024-10-16 07:12:28.779427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.558 qpair failed and we were unable to recover it. 00:29:29.558 [2024-10-16 07:12:28.779791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.558 [2024-10-16 07:12:28.779821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.558 qpair failed and we were unable to recover it. 00:29:29.558 [2024-10-16 07:12:28.780211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.558 [2024-10-16 07:12:28.780248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.558 qpair failed and we were unable to recover it. 00:29:29.558 [2024-10-16 07:12:28.780593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.558 [2024-10-16 07:12:28.780622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.558 qpair failed and we were unable to recover it. 00:29:29.558 [2024-10-16 07:12:28.780915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.558 [2024-10-16 07:12:28.780946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.558 qpair failed and we were unable to recover it. 
00:29:29.558 [2024-10-16 07:12:28.781303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.558 [2024-10-16 07:12:28.781331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.558 qpair failed and we were unable to recover it. 00:29:29.558 [2024-10-16 07:12:28.781682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.558 [2024-10-16 07:12:28.781712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.558 qpair failed and we were unable to recover it. 00:29:29.558 [2024-10-16 07:12:28.782063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.558 [2024-10-16 07:12:28.782094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.558 qpair failed and we were unable to recover it. 00:29:29.558 [2024-10-16 07:12:28.782454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.558 [2024-10-16 07:12:28.782484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.558 qpair failed and we were unable to recover it. 00:29:29.558 [2024-10-16 07:12:28.782828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.558 [2024-10-16 07:12:28.782872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.558 qpair failed and we were unable to recover it. 00:29:29.558 [2024-10-16 07:12:28.783220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.558 [2024-10-16 07:12:28.783251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.558 qpair failed and we were unable to recover it. 00:29:29.558 [2024-10-16 07:12:28.783693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.558 [2024-10-16 07:12:28.783722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.558 qpair failed and we were unable to recover it. 00:29:29.558 [2024-10-16 07:12:28.784003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.558 [2024-10-16 07:12:28.784035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.558 qpair failed and we were unable to recover it. 00:29:29.558 [2024-10-16 07:12:28.784380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.558 [2024-10-16 07:12:28.784410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.558 qpair failed and we were unable to recover it. 00:29:29.558 [2024-10-16 07:12:28.784792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.558 [2024-10-16 07:12:28.784822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.558 qpair failed and we were unable to recover it. 
00:29:29.558 [2024-10-16 07:12:28.785229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.558 [2024-10-16 07:12:28.785257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.558 qpair failed and we were unable to recover it. 00:29:29.558 [2024-10-16 07:12:28.785608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.558 [2024-10-16 07:12:28.785638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.558 qpair failed and we were unable to recover it. 00:29:29.558 [2024-10-16 07:12:28.786008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.558 [2024-10-16 07:12:28.786038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.558 qpair failed and we were unable to recover it. 00:29:29.558 [2024-10-16 07:12:28.786331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.558 [2024-10-16 07:12:28.786359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.558 qpair failed and we were unable to recover it. 00:29:29.558 [2024-10-16 07:12:28.786715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.558 [2024-10-16 07:12:28.786743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.558 qpair failed and we were unable to recover it. 00:29:29.558 [2024-10-16 07:12:28.787073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.558 [2024-10-16 07:12:28.787104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.558 qpair failed and we were unable to recover it. 00:29:29.558 [2024-10-16 07:12:28.787486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.558 [2024-10-16 07:12:28.787514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.558 qpair failed and we were unable to recover it. 00:29:29.558 [2024-10-16 07:12:28.787883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.558 [2024-10-16 07:12:28.787916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.558 qpair failed and we were unable to recover it. 00:29:29.558 [2024-10-16 07:12:28.788270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.558 [2024-10-16 07:12:28.788299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.558 qpair failed and we were unable to recover it. 00:29:29.558 [2024-10-16 07:12:28.788627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.558 [2024-10-16 07:12:28.788658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.558 qpair failed and we were unable to recover it. 
00:29:29.558 [2024-10-16 07:12:28.789002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.558 [2024-10-16 07:12:28.789032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.558 qpair failed and we were unable to recover it. 00:29:29.558 [2024-10-16 07:12:28.789393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.558 [2024-10-16 07:12:28.789422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.558 qpair failed and we were unable to recover it. 00:29:29.558 [2024-10-16 07:12:28.789787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.558 [2024-10-16 07:12:28.789816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.558 qpair failed and we were unable to recover it. 00:29:29.558 [2024-10-16 07:12:28.790164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.558 [2024-10-16 07:12:28.790192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.558 qpair failed and we were unable to recover it. 00:29:29.558 [2024-10-16 07:12:28.790563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.559 [2024-10-16 07:12:28.790591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.559 qpair failed and we were unable to recover it. 00:29:29.559 [2024-10-16 07:12:28.790964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.559 [2024-10-16 07:12:28.790995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.559 qpair failed and we were unable to recover it. 00:29:29.559 [2024-10-16 07:12:28.791370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.559 [2024-10-16 07:12:28.791398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.559 qpair failed and we were unable to recover it. 00:29:29.559 [2024-10-16 07:12:28.791770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.559 [2024-10-16 07:12:28.791798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.559 qpair failed and we were unable to recover it. 00:29:29.559 [2024-10-16 07:12:28.792095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.559 [2024-10-16 07:12:28.792125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.559 qpair failed and we were unable to recover it. 00:29:29.559 [2024-10-16 07:12:28.792479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.559 [2024-10-16 07:12:28.792508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.559 qpair failed and we were unable to recover it. 
00:29:29.559 [2024-10-16 07:12:28.792866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.559 [2024-10-16 07:12:28.792896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.559 qpair failed and we were unable to recover it. 00:29:29.559 [2024-10-16 07:12:28.793270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.559 [2024-10-16 07:12:28.793298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.559 qpair failed and we were unable to recover it. 00:29:29.559 [2024-10-16 07:12:28.793650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.559 [2024-10-16 07:12:28.793679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.559 qpair failed and we were unable to recover it. 00:29:29.559 [2024-10-16 07:12:28.794027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.559 [2024-10-16 07:12:28.794057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.559 qpair failed and we were unable to recover it. 00:29:29.559 [2024-10-16 07:12:28.794430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.559 [2024-10-16 07:12:28.794459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.559 qpair failed and we were unable to recover it. 00:29:29.559 [2024-10-16 07:12:28.794832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.559 [2024-10-16 07:12:28.794871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.559 qpair failed and we were unable to recover it. 00:29:29.559 [2024-10-16 07:12:28.795254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.559 [2024-10-16 07:12:28.795282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.559 qpair failed and we were unable to recover it. 00:29:29.559 [2024-10-16 07:12:28.795695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.559 [2024-10-16 07:12:28.795724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.559 qpair failed and we were unable to recover it. 00:29:29.559 [2024-10-16 07:12:28.796088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.559 [2024-10-16 07:12:28.796134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.559 qpair failed and we were unable to recover it. 00:29:29.559 [2024-10-16 07:12:28.796461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.559 [2024-10-16 07:12:28.796490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.559 qpair failed and we were unable to recover it. 
00:29:29.559 [2024-10-16 07:12:28.796742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.559 [2024-10-16 07:12:28.796770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.559 qpair failed and we were unable to recover it. 00:29:29.559 [2024-10-16 07:12:28.797127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.559 [2024-10-16 07:12:28.797157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.559 qpair failed and we were unable to recover it. 00:29:29.559 [2024-10-16 07:12:28.797401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.559 [2024-10-16 07:12:28.797429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.559 qpair failed and we were unable to recover it. 00:29:29.559 [2024-10-16 07:12:28.797786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.559 [2024-10-16 07:12:28.797815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.559 qpair failed and we were unable to recover it. 00:29:29.559 [2024-10-16 07:12:28.798225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.559 [2024-10-16 07:12:28.798255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.559 qpair failed and we were unable to recover it. 00:29:29.559 [2024-10-16 07:12:28.798605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.559 [2024-10-16 07:12:28.798635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.559 qpair failed and we were unable to recover it. 00:29:29.559 [2024-10-16 07:12:28.798985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.559 [2024-10-16 07:12:28.799015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.559 qpair failed and we were unable to recover it. 00:29:29.559 [2024-10-16 07:12:28.799397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.559 [2024-10-16 07:12:28.799426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.559 qpair failed and we were unable to recover it. 00:29:29.559 [2024-10-16 07:12:28.799793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.559 [2024-10-16 07:12:28.799821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.559 qpair failed and we were unable to recover it. 00:29:29.559 [2024-10-16 07:12:28.800199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.559 [2024-10-16 07:12:28.800230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.559 qpair failed and we were unable to recover it. 
00:29:29.559 [2024-10-16 07:12:28.800589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.559 [2024-10-16 07:12:28.800617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.559 qpair failed and we were unable to recover it. 00:29:29.559 [2024-10-16 07:12:28.800965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.559 [2024-10-16 07:12:28.800995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.559 qpair failed and we were unable to recover it. 00:29:29.559 [2024-10-16 07:12:28.801371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.559 [2024-10-16 07:12:28.801400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.559 qpair failed and we were unable to recover it. 00:29:29.559 [2024-10-16 07:12:28.801751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.559 [2024-10-16 07:12:28.801779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.559 qpair failed and we were unable to recover it. 00:29:29.559 [2024-10-16 07:12:28.802133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.559 [2024-10-16 07:12:28.802164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.559 qpair failed and we were unable to recover it. 00:29:29.559 [2024-10-16 07:12:28.802545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.559 [2024-10-16 07:12:28.802573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.559 qpair failed and we were unable to recover it. 00:29:29.559 [2024-10-16 07:12:28.802761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.559 [2024-10-16 07:12:28.802792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.559 qpair failed and we were unable to recover it. 00:29:29.559 [2024-10-16 07:12:28.803174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.559 [2024-10-16 07:12:28.803204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.559 qpair failed and we were unable to recover it. 00:29:29.559 [2024-10-16 07:12:28.803573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.559 [2024-10-16 07:12:28.803603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.559 qpair failed and we were unable to recover it. 00:29:29.559 [2024-10-16 07:12:28.803976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.559 [2024-10-16 07:12:28.804007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 
00:29:29.560 [2024-10-16 07:12:28.804352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-10-16 07:12:28.804382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-10-16 07:12:28.804730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-10-16 07:12:28.804759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-10-16 07:12:28.805094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-10-16 07:12:28.805126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-10-16 07:12:28.805461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-10-16 07:12:28.805490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-10-16 07:12:28.805855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-10-16 07:12:28.805885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-10-16 07:12:28.806237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-10-16 07:12:28.806272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-10-16 07:12:28.806513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-10-16 07:12:28.806545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-10-16 07:12:28.806916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-10-16 07:12:28.806948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-10-16 07:12:28.807331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-10-16 07:12:28.807359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-10-16 07:12:28.807709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-10-16 07:12:28.807738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 
00:29:29.560 [2024-10-16 07:12:28.808099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-10-16 07:12:28.808129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-10-16 07:12:28.808462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-10-16 07:12:28.808491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-10-16 07:12:28.808864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-10-16 07:12:28.808894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-10-16 07:12:28.809235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-10-16 07:12:28.809265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-10-16 07:12:28.809610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-10-16 07:12:28.809638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-10-16 07:12:28.809974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-10-16 07:12:28.810005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-10-16 07:12:28.810434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-10-16 07:12:28.810463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-10-16 07:12:28.810816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-10-16 07:12:28.810854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-10-16 07:12:28.811115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-10-16 07:12:28.811143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-10-16 07:12:28.811490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-10-16 07:12:28.811520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 
00:29:29.560 [2024-10-16 07:12:28.811879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-10-16 07:12:28.811911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-10-16 07:12:28.812270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-10-16 07:12:28.812300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-10-16 07:12:28.812540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-10-16 07:12:28.812568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-10-16 07:12:28.812947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-10-16 07:12:28.812977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-10-16 07:12:28.813334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-10-16 07:12:28.813362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-10-16 07:12:28.813727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-10-16 07:12:28.813755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-10-16 07:12:28.814129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-10-16 07:12:28.814158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-10-16 07:12:28.814506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-10-16 07:12:28.814535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-10-16 07:12:28.814897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-10-16 07:12:28.814927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-10-16 07:12:28.815299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-10-16 07:12:28.815327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 
00:29:29.566 [2024-10-16 07:12:28.892728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.566 [2024-10-16 07:12:28.892756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.566 qpair failed and we were unable to recover it. 00:29:29.566 [2024-10-16 07:12:28.893137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.566 [2024-10-16 07:12:28.893166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.566 qpair failed and we were unable to recover it. 00:29:29.566 [2024-10-16 07:12:28.893536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.566 [2024-10-16 07:12:28.893565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.566 qpair failed and we were unable to recover it. 00:29:29.566 [2024-10-16 07:12:28.893912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.566 [2024-10-16 07:12:28.893943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.566 qpair failed and we were unable to recover it. 00:29:29.566 [2024-10-16 07:12:28.894307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.566 [2024-10-16 07:12:28.894336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.566 qpair failed and we were unable to recover it. 00:29:29.566 [2024-10-16 07:12:28.894712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.566 [2024-10-16 07:12:28.894741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.566 qpair failed and we were unable to recover it. 00:29:29.566 [2024-10-16 07:12:28.895108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.566 [2024-10-16 07:12:28.895138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.566 qpair failed and we were unable to recover it. 00:29:29.566 [2024-10-16 07:12:28.895469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.566 [2024-10-16 07:12:28.895498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.566 qpair failed and we were unable to recover it. 00:29:29.566 [2024-10-16 07:12:28.895867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.566 [2024-10-16 07:12:28.895898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.566 qpair failed and we were unable to recover it. 00:29:29.566 [2024-10-16 07:12:28.896134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.566 [2024-10-16 07:12:28.896162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.566 qpair failed and we were unable to recover it. 
00:29:29.566 [2024-10-16 07:12:28.896429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.566 [2024-10-16 07:12:28.896458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.566 qpair failed and we were unable to recover it. 00:29:29.566 [2024-10-16 07:12:28.896800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.566 [2024-10-16 07:12:28.896830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.566 qpair failed and we were unable to recover it. 00:29:29.566 [2024-10-16 07:12:28.897212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.566 [2024-10-16 07:12:28.897241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.566 qpair failed and we were unable to recover it. 00:29:29.566 [2024-10-16 07:12:28.897609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.566 [2024-10-16 07:12:28.897638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.566 qpair failed and we were unable to recover it. 00:29:29.566 [2024-10-16 07:12:28.897998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.566 [2024-10-16 07:12:28.898029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.566 qpair failed and we were unable to recover it. 00:29:29.566 [2024-10-16 07:12:28.898383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.566 [2024-10-16 07:12:28.898412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.566 qpair failed and we were unable to recover it. 00:29:29.566 [2024-10-16 07:12:28.898736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.566 [2024-10-16 07:12:28.898764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.566 qpair failed and we were unable to recover it. 00:29:29.566 [2024-10-16 07:12:28.899169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.566 [2024-10-16 07:12:28.899199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.566 qpair failed and we were unable to recover it. 00:29:29.566 [2024-10-16 07:12:28.899558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.566 [2024-10-16 07:12:28.899587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.566 qpair failed and we were unable to recover it. 00:29:29.566 [2024-10-16 07:12:28.899950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.566 [2024-10-16 07:12:28.899987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.566 qpair failed and we were unable to recover it. 
00:29:29.566 [2024-10-16 07:12:28.900337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.566 [2024-10-16 07:12:28.900366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.566 qpair failed and we were unable to recover it. 00:29:29.566 [2024-10-16 07:12:28.900746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.566 [2024-10-16 07:12:28.900774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.566 qpair failed and we were unable to recover it. 00:29:29.566 [2024-10-16 07:12:28.901039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.566 [2024-10-16 07:12:28.901070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.566 qpair failed and we were unable to recover it. 00:29:29.566 [2024-10-16 07:12:28.901406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.566 [2024-10-16 07:12:28.901435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.566 qpair failed and we were unable to recover it. 00:29:29.566 [2024-10-16 07:12:28.901797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.566 [2024-10-16 07:12:28.901827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.566 qpair failed and we were unable to recover it. 00:29:29.566 [2024-10-16 07:12:28.902210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.566 [2024-10-16 07:12:28.902239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.566 qpair failed and we were unable to recover it. 00:29:29.566 [2024-10-16 07:12:28.902600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.566 [2024-10-16 07:12:28.902628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.566 qpair failed and we were unable to recover it. 00:29:29.566 [2024-10-16 07:12:28.902988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.566 [2024-10-16 07:12:28.903018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.566 qpair failed and we were unable to recover it. 00:29:29.566 [2024-10-16 07:12:28.903381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.566 [2024-10-16 07:12:28.903409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.566 qpair failed and we were unable to recover it. 00:29:29.566 [2024-10-16 07:12:28.903785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.566 [2024-10-16 07:12:28.903814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.566 qpair failed and we were unable to recover it. 
00:29:29.566 [2024-10-16 07:12:28.904231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.566 [2024-10-16 07:12:28.904262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.566 qpair failed and we were unable to recover it. 00:29:29.566 [2024-10-16 07:12:28.904633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.566 [2024-10-16 07:12:28.904662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.566 qpair failed and we were unable to recover it. 00:29:29.567 [2024-10-16 07:12:28.905023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.567 [2024-10-16 07:12:28.905053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.567 qpair failed and we were unable to recover it. 00:29:29.567 [2024-10-16 07:12:28.905419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.567 [2024-10-16 07:12:28.905448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.567 qpair failed and we were unable to recover it. 00:29:29.567 [2024-10-16 07:12:28.905817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.567 [2024-10-16 07:12:28.905855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.567 qpair failed and we were unable to recover it. 00:29:29.567 [2024-10-16 07:12:28.906201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.567 [2024-10-16 07:12:28.906229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.567 qpair failed and we were unable to recover it. 00:29:29.567 [2024-10-16 07:12:28.906589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.567 [2024-10-16 07:12:28.906619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.567 qpair failed and we were unable to recover it. 00:29:29.567 [2024-10-16 07:12:28.906884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.567 [2024-10-16 07:12:28.906915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.567 qpair failed and we were unable to recover it. 00:29:29.567 [2024-10-16 07:12:28.907275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.567 [2024-10-16 07:12:28.907303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.567 qpair failed and we were unable to recover it. 00:29:29.567 [2024-10-16 07:12:28.907663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.567 [2024-10-16 07:12:28.907691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.567 qpair failed and we were unable to recover it. 
00:29:29.567 [2024-10-16 07:12:28.908068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.567 [2024-10-16 07:12:28.908097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.567 qpair failed and we were unable to recover it. 00:29:29.567 [2024-10-16 07:12:28.908440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.567 [2024-10-16 07:12:28.908469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.567 qpair failed and we were unable to recover it. 00:29:29.567 [2024-10-16 07:12:28.908732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.567 [2024-10-16 07:12:28.908761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.567 qpair failed and we were unable to recover it. 00:29:29.567 [2024-10-16 07:12:28.909115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.567 [2024-10-16 07:12:28.909144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.567 qpair failed and we were unable to recover it. 00:29:29.567 [2024-10-16 07:12:28.909513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.567 [2024-10-16 07:12:28.909542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.567 qpair failed and we were unable to recover it. 00:29:29.567 [2024-10-16 07:12:28.909871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.567 [2024-10-16 07:12:28.909900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.567 qpair failed and we were unable to recover it. 00:29:29.567 [2024-10-16 07:12:28.910251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.567 [2024-10-16 07:12:28.910280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.567 qpair failed and we were unable to recover it. 00:29:29.567 [2024-10-16 07:12:28.910540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.567 [2024-10-16 07:12:28.910569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.567 qpair failed and we were unable to recover it. 00:29:29.567 [2024-10-16 07:12:28.910921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.567 [2024-10-16 07:12:28.910952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.567 qpair failed and we were unable to recover it. 00:29:29.567 [2024-10-16 07:12:28.911325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.567 [2024-10-16 07:12:28.911353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.567 qpair failed and we were unable to recover it. 
00:29:29.567 [2024-10-16 07:12:28.911622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.567 [2024-10-16 07:12:28.911651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.567 qpair failed and we were unable to recover it. 00:29:29.567 [2024-10-16 07:12:28.911880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.567 [2024-10-16 07:12:28.911909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.567 qpair failed and we were unable to recover it. 00:29:29.567 [2024-10-16 07:12:28.912277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.567 [2024-10-16 07:12:28.912307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.567 qpair failed and we were unable to recover it. 00:29:29.567 [2024-10-16 07:12:28.912688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.567 [2024-10-16 07:12:28.912718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.567 qpair failed and we were unable to recover it. 00:29:29.567 [2024-10-16 07:12:28.913094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.567 [2024-10-16 07:12:28.913123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.567 qpair failed and we were unable to recover it. 00:29:29.567 [2024-10-16 07:12:28.913455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.567 [2024-10-16 07:12:28.913484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.567 qpair failed and we were unable to recover it. 00:29:29.567 [2024-10-16 07:12:28.913865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.567 [2024-10-16 07:12:28.913895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.567 qpair failed and we were unable to recover it. 00:29:29.567 [2024-10-16 07:12:28.914269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.567 [2024-10-16 07:12:28.914296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.567 qpair failed and we were unable to recover it. 00:29:29.567 [2024-10-16 07:12:28.914662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.567 [2024-10-16 07:12:28.914691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.567 qpair failed and we were unable to recover it. 00:29:29.567 [2024-10-16 07:12:28.915079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.567 [2024-10-16 07:12:28.915109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.567 qpair failed and we were unable to recover it. 
00:29:29.567 [2024-10-16 07:12:28.915372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.567 [2024-10-16 07:12:28.915401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.567 qpair failed and we were unable to recover it. 00:29:29.567 [2024-10-16 07:12:28.915752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.567 [2024-10-16 07:12:28.915781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.567 qpair failed and we were unable to recover it. 00:29:29.567 [2024-10-16 07:12:28.916116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.567 [2024-10-16 07:12:28.916146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.567 qpair failed and we were unable to recover it. 00:29:29.567 [2024-10-16 07:12:28.916481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.567 [2024-10-16 07:12:28.916511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.567 qpair failed and we were unable to recover it. 00:29:29.567 [2024-10-16 07:12:28.916870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.567 [2024-10-16 07:12:28.916900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.567 qpair failed and we were unable to recover it. 00:29:29.567 [2024-10-16 07:12:28.917263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.567 [2024-10-16 07:12:28.917291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.567 qpair failed and we were unable to recover it. 00:29:29.567 [2024-10-16 07:12:28.917639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.567 [2024-10-16 07:12:28.917667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.567 qpair failed and we were unable to recover it. 00:29:29.567 [2024-10-16 07:12:28.918043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.567 [2024-10-16 07:12:28.918073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.567 qpair failed and we were unable to recover it. 00:29:29.567 [2024-10-16 07:12:28.918330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.567 [2024-10-16 07:12:28.918361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.567 qpair failed and we were unable to recover it. 00:29:29.567 [2024-10-16 07:12:28.918716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.567 [2024-10-16 07:12:28.918746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.567 qpair failed and we were unable to recover it. 
00:29:29.567 [2024-10-16 07:12:28.919096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.567 [2024-10-16 07:12:28.919126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.567 qpair failed and we were unable to recover it. 00:29:29.567 [2024-10-16 07:12:28.919509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.567 [2024-10-16 07:12:28.919538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.567 qpair failed and we were unable to recover it. 00:29:29.567 [2024-10-16 07:12:28.919901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.567 [2024-10-16 07:12:28.919933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.568 qpair failed and we were unable to recover it. 00:29:29.568 [2024-10-16 07:12:28.920264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.568 [2024-10-16 07:12:28.920293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.568 qpair failed and we were unable to recover it. 00:29:29.568 [2024-10-16 07:12:28.920649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.568 [2024-10-16 07:12:28.920679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.568 qpair failed and we were unable to recover it. 00:29:29.568 [2024-10-16 07:12:28.921010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.568 [2024-10-16 07:12:28.921040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.568 qpair failed and we were unable to recover it. 00:29:29.568 [2024-10-16 07:12:28.921407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.568 [2024-10-16 07:12:28.921435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.568 qpair failed and we were unable to recover it. 00:29:29.568 [2024-10-16 07:12:28.921819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.568 [2024-10-16 07:12:28.921858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.568 qpair failed and we were unable to recover it. 00:29:29.568 [2024-10-16 07:12:28.922201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.568 [2024-10-16 07:12:28.922230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.568 qpair failed and we were unable to recover it. 00:29:29.568 [2024-10-16 07:12:28.922518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.568 [2024-10-16 07:12:28.922548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.568 qpair failed and we were unable to recover it. 
00:29:29.568 [2024-10-16 07:12:28.922798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.568 [2024-10-16 07:12:28.922829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.568 qpair failed and we were unable to recover it. 00:29:29.568 [2024-10-16 07:12:28.923102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.568 [2024-10-16 07:12:28.923132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.568 qpair failed and we were unable to recover it. 00:29:29.568 [2024-10-16 07:12:28.923511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.568 [2024-10-16 07:12:28.923540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.568 qpair failed and we were unable to recover it. 00:29:29.568 [2024-10-16 07:12:28.923894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.568 [2024-10-16 07:12:28.923925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.568 qpair failed and we were unable to recover it. 00:29:29.568 [2024-10-16 07:12:28.924329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.568 [2024-10-16 07:12:28.924358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.568 qpair failed and we were unable to recover it. 00:29:29.568 [2024-10-16 07:12:28.924728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.568 [2024-10-16 07:12:28.924758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.568 qpair failed and we were unable to recover it. 00:29:29.568 [2024-10-16 07:12:28.925098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.568 [2024-10-16 07:12:28.925129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.568 qpair failed and we were unable to recover it. 00:29:29.568 [2024-10-16 07:12:28.925475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.568 [2024-10-16 07:12:28.925511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.568 qpair failed and we were unable to recover it. 00:29:29.568 [2024-10-16 07:12:28.925871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.568 [2024-10-16 07:12:28.925902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.568 qpair failed and we were unable to recover it. 00:29:29.568 [2024-10-16 07:12:28.926273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.568 [2024-10-16 07:12:28.926301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.568 qpair failed and we were unable to recover it. 
00:29:29.568 [2024-10-16 07:12:28.926668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.568 [2024-10-16 07:12:28.926697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.568 qpair failed and we were unable to recover it. 00:29:29.568 [2024-10-16 07:12:28.927036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.568 [2024-10-16 07:12:28.927066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.568 qpair failed and we were unable to recover it. 00:29:29.568 [2024-10-16 07:12:28.927404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.568 [2024-10-16 07:12:28.927432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.568 qpair failed and we were unable to recover it. 00:29:29.568 [2024-10-16 07:12:28.927809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.568 [2024-10-16 07:12:28.927838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.568 qpair failed and we were unable to recover it. 00:29:29.568 [2024-10-16 07:12:28.928227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.568 [2024-10-16 07:12:28.928258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.568 qpair failed and we were unable to recover it. 00:29:29.568 [2024-10-16 07:12:28.928684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.568 [2024-10-16 07:12:28.928712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.568 qpair failed and we were unable to recover it. 00:29:29.568 [2024-10-16 07:12:28.928981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.568 [2024-10-16 07:12:28.929011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.568 qpair failed and we were unable to recover it. 00:29:29.568 [2024-10-16 07:12:28.929375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.568 [2024-10-16 07:12:28.929403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.568 qpair failed and we were unable to recover it. 00:29:29.568 [2024-10-16 07:12:28.929763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.568 [2024-10-16 07:12:28.929791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.568 qpair failed and we were unable to recover it. 00:29:29.568 [2024-10-16 07:12:28.930233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.568 [2024-10-16 07:12:28.930264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.568 qpair failed and we were unable to recover it. 
00:29:29.568 [2024-10-16 07:12:28.930688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.568 [2024-10-16 07:12:28.930717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.568 qpair failed and we were unable to recover it. 00:29:29.568 [2024-10-16 07:12:28.931096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.568 [2024-10-16 07:12:28.931127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.568 qpair failed and we were unable to recover it. 00:29:29.568 [2024-10-16 07:12:28.931493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.568 [2024-10-16 07:12:28.931522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.568 qpair failed and we were unable to recover it. 00:29:29.568 [2024-10-16 07:12:28.931878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.568 [2024-10-16 07:12:28.931908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.568 qpair failed and we were unable to recover it. 00:29:29.568 [2024-10-16 07:12:28.932278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.568 [2024-10-16 07:12:28.932308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.568 qpair failed and we were unable to recover it. 00:29:29.568 [2024-10-16 07:12:28.932681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.568 [2024-10-16 07:12:28.932710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.568 qpair failed and we were unable to recover it. 00:29:29.568 [2024-10-16 07:12:28.933086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.568 [2024-10-16 07:12:28.933116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.568 qpair failed and we were unable to recover it. 00:29:29.568 [2024-10-16 07:12:28.933468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.568 [2024-10-16 07:12:28.933498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.568 qpair failed and we were unable to recover it. 00:29:29.568 [2024-10-16 07:12:28.933859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.568 [2024-10-16 07:12:28.933889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.568 qpair failed and we were unable to recover it. 00:29:29.568 [2024-10-16 07:12:28.934215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.568 [2024-10-16 07:12:28.934244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.568 qpair failed and we were unable to recover it. 
00:29:29.568 [2024-10-16 07:12:28.934625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.568 [2024-10-16 07:12:28.934653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.568 qpair failed and we were unable to recover it. 00:29:29.568 [2024-10-16 07:12:28.935025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.568 [2024-10-16 07:12:28.935055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.568 qpair failed and we were unable to recover it. 00:29:29.568 [2024-10-16 07:12:28.935420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.568 [2024-10-16 07:12:28.935447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.568 qpair failed and we were unable to recover it. 00:29:29.568 [2024-10-16 07:12:28.935819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.569 [2024-10-16 07:12:28.935867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.569 qpair failed and we were unable to recover it. 00:29:29.569 [2024-10-16 07:12:28.936196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.569 [2024-10-16 07:12:28.936231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.569 qpair failed and we were unable to recover it. 00:29:29.569 [2024-10-16 07:12:28.936583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.569 [2024-10-16 07:12:28.936613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.569 qpair failed and we were unable to recover it. 00:29:29.569 [2024-10-16 07:12:28.936978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.569 [2024-10-16 07:12:28.937009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.569 qpair failed and we were unable to recover it. 00:29:29.569 [2024-10-16 07:12:28.937259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.569 [2024-10-16 07:12:28.937290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.569 qpair failed and we were unable to recover it. 00:29:29.569 [2024-10-16 07:12:28.937653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.569 [2024-10-16 07:12:28.937682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.569 qpair failed and we were unable to recover it. 00:29:29.569 [2024-10-16 07:12:28.938042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.569 [2024-10-16 07:12:28.938073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.569 qpair failed and we were unable to recover it. 
00:29:29.569 [2024-10-16 07:12:28.938439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.569 [2024-10-16 07:12:28.938467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.569 qpair failed and we were unable to recover it. 00:29:29.569 [2024-10-16 07:12:28.938854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.569 [2024-10-16 07:12:28.938884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.569 qpair failed and we were unable to recover it. 00:29:29.569 [2024-10-16 07:12:28.939251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.569 [2024-10-16 07:12:28.939281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.569 qpair failed and we were unable to recover it. 00:29:29.569 [2024-10-16 07:12:28.939640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.569 [2024-10-16 07:12:28.939670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.569 qpair failed and we were unable to recover it. 00:29:29.569 [2024-10-16 07:12:28.939919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.569 [2024-10-16 07:12:28.939950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.569 qpair failed and we were unable to recover it. 00:29:29.569 [2024-10-16 07:12:28.940315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.569 [2024-10-16 07:12:28.940344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.569 qpair failed and we were unable to recover it. 00:29:29.569 [2024-10-16 07:12:28.940689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.569 [2024-10-16 07:12:28.940717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.569 qpair failed and we were unable to recover it. 00:29:29.569 [2024-10-16 07:12:28.941071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.569 [2024-10-16 07:12:28.941103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.569 qpair failed and we were unable to recover it. 00:29:29.569 [2024-10-16 07:12:28.941341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.569 [2024-10-16 07:12:28.941373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.569 qpair failed and we were unable to recover it. 00:29:29.569 [2024-10-16 07:12:28.941725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.569 [2024-10-16 07:12:28.941754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.569 qpair failed and we were unable to recover it. 
00:29:29.569 [2024-10-16 07:12:28.942103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.569 [2024-10-16 07:12:28.942133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420
00:29:29.569 qpair failed and we were unable to recover it.
[... the same three-line failure repeats for every reconnect attempt from 07:12:28.942103 through 07:12:29.023056, differing only in timestamp: connect() fails with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error on tqpair=0x1712180 (addr=10.0.0.2, port=4420), and the qpair cannot be recovered ...]
00:29:29.575 [2024-10-16 07:12:29.023025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.575 [2024-10-16 07:12:29.023056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420
00:29:29.575 qpair failed and we were unable to recover it.
00:29:29.575 [2024-10-16 07:12:29.023409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.575 [2024-10-16 07:12:29.023437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.575 qpair failed and we were unable to recover it. 00:29:29.575 [2024-10-16 07:12:29.023806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.575 [2024-10-16 07:12:29.023836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.575 qpair failed and we were unable to recover it. 00:29:29.575 [2024-10-16 07:12:29.024210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.575 [2024-10-16 07:12:29.024239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.575 qpair failed and we were unable to recover it. 00:29:29.575 [2024-10-16 07:12:29.024601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.575 [2024-10-16 07:12:29.024631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.575 qpair failed and we were unable to recover it. 00:29:29.575 [2024-10-16 07:12:29.024989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.575 [2024-10-16 07:12:29.025018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.575 qpair failed and we were unable to recover it. 00:29:29.575 [2024-10-16 07:12:29.025394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.575 [2024-10-16 07:12:29.025422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.575 qpair failed and we were unable to recover it. 00:29:29.575 [2024-10-16 07:12:29.025765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.575 [2024-10-16 07:12:29.025792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.575 qpair failed and we were unable to recover it. 00:29:29.575 [2024-10-16 07:12:29.026016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.575 [2024-10-16 07:12:29.026056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.575 qpair failed and we were unable to recover it. 00:29:29.575 [2024-10-16 07:12:29.026439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.575 [2024-10-16 07:12:29.026466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.575 qpair failed and we were unable to recover it. 00:29:29.575 [2024-10-16 07:12:29.026872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.575 [2024-10-16 07:12:29.026903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.575 qpair failed and we were unable to recover it. 
00:29:29.575 [2024-10-16 07:12:29.027279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.575 [2024-10-16 07:12:29.027307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.575 qpair failed and we were unable to recover it. 00:29:29.575 [2024-10-16 07:12:29.027656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.575 [2024-10-16 07:12:29.027683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.575 qpair failed and we were unable to recover it. 00:29:29.575 [2024-10-16 07:12:29.028020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.575 [2024-10-16 07:12:29.028049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.575 qpair failed and we were unable to recover it. 00:29:29.575 [2024-10-16 07:12:29.028370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.575 [2024-10-16 07:12:29.028399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.575 qpair failed and we were unable to recover it. 00:29:29.575 [2024-10-16 07:12:29.028759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.575 [2024-10-16 07:12:29.028787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.575 qpair failed and we were unable to recover it. 00:29:29.575 [2024-10-16 07:12:29.029141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.575 [2024-10-16 07:12:29.029173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.575 qpair failed and we were unable to recover it. 00:29:29.575 [2024-10-16 07:12:29.029531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.575 [2024-10-16 07:12:29.029561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.575 qpair failed and we were unable to recover it. 00:29:29.575 [2024-10-16 07:12:29.029923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.575 [2024-10-16 07:12:29.029954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.575 qpair failed and we were unable to recover it. 00:29:29.575 [2024-10-16 07:12:29.030212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.575 [2024-10-16 07:12:29.030242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.575 qpair failed and we were unable to recover it. 00:29:29.575 [2024-10-16 07:12:29.030498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.575 [2024-10-16 07:12:29.030529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.575 qpair failed and we were unable to recover it. 
00:29:29.575 [2024-10-16 07:12:29.030914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.575 [2024-10-16 07:12:29.030946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.575 qpair failed and we were unable to recover it. 00:29:29.575 [2024-10-16 07:12:29.031329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.575 [2024-10-16 07:12:29.031360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.575 qpair failed and we were unable to recover it. 00:29:29.575 [2024-10-16 07:12:29.031700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.575 [2024-10-16 07:12:29.031733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.575 qpair failed and we were unable to recover it. 00:29:29.575 [2024-10-16 07:12:29.032000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.575 [2024-10-16 07:12:29.032032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.575 qpair failed and we were unable to recover it. 00:29:29.575 [2024-10-16 07:12:29.032416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.575 [2024-10-16 07:12:29.032449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.575 qpair failed and we were unable to recover it. 00:29:29.575 [2024-10-16 07:12:29.032822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.575 [2024-10-16 07:12:29.032863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.575 qpair failed and we were unable to recover it. 00:29:29.575 [2024-10-16 07:12:29.033244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.575 [2024-10-16 07:12:29.033275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.575 qpair failed and we were unable to recover it. 00:29:29.575 [2024-10-16 07:12:29.033642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.575 [2024-10-16 07:12:29.033672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.575 qpair failed and we were unable to recover it. 00:29:29.575 [2024-10-16 07:12:29.034037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.575 [2024-10-16 07:12:29.034069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.576 qpair failed and we were unable to recover it. 00:29:29.576 [2024-10-16 07:12:29.034419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.576 [2024-10-16 07:12:29.034451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.576 qpair failed and we were unable to recover it. 
00:29:29.576 [2024-10-16 07:12:29.034807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.576 [2024-10-16 07:12:29.034839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.576 qpair failed and we were unable to recover it. 00:29:29.576 [2024-10-16 07:12:29.035225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.576 [2024-10-16 07:12:29.035256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.576 qpair failed and we were unable to recover it. 00:29:29.576 [2024-10-16 07:12:29.035509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.576 [2024-10-16 07:12:29.035540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.576 qpair failed and we were unable to recover it. 00:29:29.576 [2024-10-16 07:12:29.035870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.576 [2024-10-16 07:12:29.035902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.576 qpair failed and we were unable to recover it. 00:29:29.576 [2024-10-16 07:12:29.036257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.576 [2024-10-16 07:12:29.036287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.576 qpair failed and we were unable to recover it. 00:29:29.576 [2024-10-16 07:12:29.036638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.576 [2024-10-16 07:12:29.036670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.576 qpair failed and we were unable to recover it. 00:29:29.576 [2024-10-16 07:12:29.037033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.576 [2024-10-16 07:12:29.037067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.576 qpair failed and we were unable to recover it. 00:29:29.576 [2024-10-16 07:12:29.037450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.576 [2024-10-16 07:12:29.037482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.576 qpair failed and we were unable to recover it. 00:29:29.576 [2024-10-16 07:12:29.037831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.576 [2024-10-16 07:12:29.037878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.576 qpair failed and we were unable to recover it. 00:29:29.576 [2024-10-16 07:12:29.038254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.576 [2024-10-16 07:12:29.038284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.576 qpair failed and we were unable to recover it. 
00:29:29.576 [2024-10-16 07:12:29.038648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.576 [2024-10-16 07:12:29.038678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.576 qpair failed and we were unable to recover it. 00:29:29.576 [2024-10-16 07:12:29.039013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.576 [2024-10-16 07:12:29.039046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.576 qpair failed and we were unable to recover it. 00:29:29.576 [2024-10-16 07:12:29.039400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.576 [2024-10-16 07:12:29.039431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.576 qpair failed and we were unable to recover it. 00:29:29.576 [2024-10-16 07:12:29.039669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.576 [2024-10-16 07:12:29.039699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.576 qpair failed and we were unable to recover it. 00:29:29.576 [2024-10-16 07:12:29.040082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.576 [2024-10-16 07:12:29.040116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.576 qpair failed and we were unable to recover it. 00:29:29.576 [2024-10-16 07:12:29.040441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.576 [2024-10-16 07:12:29.040472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.576 qpair failed and we were unable to recover it. 00:29:29.576 [2024-10-16 07:12:29.040826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.576 [2024-10-16 07:12:29.040873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.576 qpair failed and we were unable to recover it. 00:29:29.576 [2024-10-16 07:12:29.041219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.576 [2024-10-16 07:12:29.041250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.576 qpair failed and we were unable to recover it. 00:29:29.576 [2024-10-16 07:12:29.041611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.576 [2024-10-16 07:12:29.041641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.576 qpair failed and we were unable to recover it. 00:29:29.576 [2024-10-16 07:12:29.041981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.576 [2024-10-16 07:12:29.042012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.576 qpair failed and we were unable to recover it. 
00:29:29.576 [2024-10-16 07:12:29.042348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.576 [2024-10-16 07:12:29.042378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.576 qpair failed and we were unable to recover it. 00:29:29.576 [2024-10-16 07:12:29.042724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.576 [2024-10-16 07:12:29.042755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.576 qpair failed and we were unable to recover it. 00:29:29.576 [2024-10-16 07:12:29.043096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.576 [2024-10-16 07:12:29.043128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.576 qpair failed and we were unable to recover it. 00:29:29.852 [2024-10-16 07:12:29.043372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.852 [2024-10-16 07:12:29.043406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.852 qpair failed and we were unable to recover it. 00:29:29.852 [2024-10-16 07:12:29.043790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.852 [2024-10-16 07:12:29.043822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.852 qpair failed and we were unable to recover it. 00:29:29.852 [2024-10-16 07:12:29.044198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.852 [2024-10-16 07:12:29.044229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.852 qpair failed and we were unable to recover it. 00:29:29.852 [2024-10-16 07:12:29.044589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.852 [2024-10-16 07:12:29.044618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.852 qpair failed and we were unable to recover it. 00:29:29.852 [2024-10-16 07:12:29.044886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.852 [2024-10-16 07:12:29.044917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.852 qpair failed and we were unable to recover it. 00:29:29.852 [2024-10-16 07:12:29.045395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.852 [2024-10-16 07:12:29.045425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.852 qpair failed and we were unable to recover it. 00:29:29.852 [2024-10-16 07:12:29.045773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.852 [2024-10-16 07:12:29.045818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.852 qpair failed and we were unable to recover it. 
00:29:29.852 [2024-10-16 07:12:29.046188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.852 [2024-10-16 07:12:29.046221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.852 qpair failed and we were unable to recover it. 00:29:29.852 [2024-10-16 07:12:29.046586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.852 [2024-10-16 07:12:29.046617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.852 qpair failed and we were unable to recover it. 00:29:29.852 [2024-10-16 07:12:29.046983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.852 [2024-10-16 07:12:29.047014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.852 qpair failed and we were unable to recover it. 00:29:29.852 [2024-10-16 07:12:29.047371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.852 [2024-10-16 07:12:29.047400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.852 qpair failed and we were unable to recover it. 00:29:29.852 [2024-10-16 07:12:29.047759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.852 [2024-10-16 07:12:29.047788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.852 qpair failed and we were unable to recover it. 00:29:29.852 [2024-10-16 07:12:29.048166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.852 [2024-10-16 07:12:29.048197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.852 qpair failed and we were unable to recover it. 00:29:29.852 [2024-10-16 07:12:29.048417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.852 [2024-10-16 07:12:29.048448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.852 qpair failed and we were unable to recover it. 00:29:29.852 [2024-10-16 07:12:29.048792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.852 [2024-10-16 07:12:29.048822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.852 qpair failed and we were unable to recover it. 00:29:29.852 [2024-10-16 07:12:29.049056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.852 [2024-10-16 07:12:29.049087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.852 qpair failed and we were unable to recover it. 00:29:29.852 [2024-10-16 07:12:29.049464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.852 [2024-10-16 07:12:29.049493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.852 qpair failed and we were unable to recover it. 
00:29:29.852 [2024-10-16 07:12:29.049866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.852 [2024-10-16 07:12:29.049897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.852 qpair failed and we were unable to recover it. 00:29:29.852 [2024-10-16 07:12:29.050263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.852 [2024-10-16 07:12:29.050293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.852 qpair failed and we were unable to recover it. 00:29:29.852 [2024-10-16 07:12:29.050653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.852 [2024-10-16 07:12:29.050683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.852 qpair failed and we were unable to recover it. 00:29:29.852 [2024-10-16 07:12:29.051018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.852 [2024-10-16 07:12:29.051050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.852 qpair failed and we were unable to recover it. 00:29:29.852 [2024-10-16 07:12:29.051401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.852 [2024-10-16 07:12:29.051431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.852 qpair failed and we were unable to recover it. 00:29:29.852 [2024-10-16 07:12:29.051790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.852 [2024-10-16 07:12:29.051820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.852 qpair failed and we were unable to recover it. 00:29:29.852 [2024-10-16 07:12:29.052203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.852 [2024-10-16 07:12:29.052233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.852 qpair failed and we were unable to recover it. 00:29:29.852 [2024-10-16 07:12:29.052605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.852 [2024-10-16 07:12:29.052635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.852 qpair failed and we were unable to recover it. 00:29:29.852 [2024-10-16 07:12:29.052977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.852 [2024-10-16 07:12:29.053007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.852 qpair failed and we were unable to recover it. 00:29:29.852 [2024-10-16 07:12:29.053370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.852 [2024-10-16 07:12:29.053400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.852 qpair failed and we were unable to recover it. 
00:29:29.852 [2024-10-16 07:12:29.053766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.852 [2024-10-16 07:12:29.053794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.852 qpair failed and we were unable to recover it. 00:29:29.852 [2024-10-16 07:12:29.054206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.852 [2024-10-16 07:12:29.054237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.852 qpair failed and we were unable to recover it. 00:29:29.853 [2024-10-16 07:12:29.054601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-10-16 07:12:29.054638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-10-16 07:12:29.054973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-10-16 07:12:29.055004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-10-16 07:12:29.055373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-10-16 07:12:29.055404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-10-16 07:12:29.055647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-10-16 07:12:29.055675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-10-16 07:12:29.056067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-10-16 07:12:29.056104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-10-16 07:12:29.056446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-10-16 07:12:29.056476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-10-16 07:12:29.056736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-10-16 07:12:29.056764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-10-16 07:12:29.056990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-10-16 07:12:29.057021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 
00:29:29.853 [2024-10-16 07:12:29.057363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-10-16 07:12:29.057392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-10-16 07:12:29.057754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-10-16 07:12:29.057784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-10-16 07:12:29.058144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-10-16 07:12:29.058174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-10-16 07:12:29.058504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-10-16 07:12:29.058534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-10-16 07:12:29.058886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-10-16 07:12:29.058918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-10-16 07:12:29.059286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-10-16 07:12:29.059317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-10-16 07:12:29.059671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-10-16 07:12:29.059700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-10-16 07:12:29.060059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-10-16 07:12:29.060090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-10-16 07:12:29.060439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-10-16 07:12:29.060467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-10-16 07:12:29.060838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-10-16 07:12:29.060879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 
00:29:29.853 [2024-10-16 07:12:29.061151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-10-16 07:12:29.061183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-10-16 07:12:29.061568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-10-16 07:12:29.061597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-10-16 07:12:29.061895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-10-16 07:12:29.061925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-10-16 07:12:29.062276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-10-16 07:12:29.062305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-10-16 07:12:29.062674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-10-16 07:12:29.062704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-10-16 07:12:29.063014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-10-16 07:12:29.063045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-10-16 07:12:29.063422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-10-16 07:12:29.063451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-10-16 07:12:29.063806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-10-16 07:12:29.063834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-10-16 07:12:29.064197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-10-16 07:12:29.064226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-10-16 07:12:29.064598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-10-16 07:12:29.064627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 
00:29:29.853 [2024-10-16 07:12:29.064947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-10-16 07:12:29.064977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-10-16 07:12:29.065336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-10-16 07:12:29.065364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-10-16 07:12:29.065738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-10-16 07:12:29.065767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-10-16 07:12:29.066048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-10-16 07:12:29.066084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-10-16 07:12:29.066441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-10-16 07:12:29.066469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-10-16 07:12:29.066867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-10-16 07:12:29.066900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-10-16 07:12:29.067270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-10-16 07:12:29.067299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-10-16 07:12:29.067690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-10-16 07:12:29.067719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-10-16 07:12:29.068086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-10-16 07:12:29.068117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-10-16 07:12:29.068483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-10-16 07:12:29.068512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 
00:29:29.853 [2024-10-16 07:12:29.068868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-10-16 07:12:29.068898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-10-16 07:12:29.069264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-10-16 07:12:29.069292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-10-16 07:12:29.069642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.854 [2024-10-16 07:12:29.069674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.854 qpair failed and we were unable to recover it. 00:29:29.854 [2024-10-16 07:12:29.070026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.854 [2024-10-16 07:12:29.070057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.854 qpair failed and we were unable to recover it. 00:29:29.854 [2024-10-16 07:12:29.070396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.854 [2024-10-16 07:12:29.070426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.854 qpair failed and we were unable to recover it. 00:29:29.854 [2024-10-16 07:12:29.070667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.854 [2024-10-16 07:12:29.070696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.854 qpair failed and we were unable to recover it. 00:29:29.854 [2024-10-16 07:12:29.070973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.854 [2024-10-16 07:12:29.071016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.854 qpair failed and we were unable to recover it. 00:29:29.854 [2024-10-16 07:12:29.071383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.854 [2024-10-16 07:12:29.071413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.854 qpair failed and we were unable to recover it. 00:29:29.854 [2024-10-16 07:12:29.071774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.854 [2024-10-16 07:12:29.071803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.854 qpair failed and we were unable to recover it. 00:29:29.854 [2024-10-16 07:12:29.072162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.854 [2024-10-16 07:12:29.072193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.854 qpair failed and we were unable to recover it. 
00:29:29.854 [2024-10-16 07:12:29.072548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.854 [2024-10-16 07:12:29.072577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420
00:29:29.854 qpair failed and we were unable to recover it.
00:29:29.854 [... the same three-line failure (connect() failed, errno = 111 / sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 07:12:29.072933 through 07:12:29.151540; duplicates elided ...]
00:29:29.859 [2024-10-16 07:12:29.151898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.859 [2024-10-16 07:12:29.151930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420
00:29:29.859 qpair failed and we were unable to recover it.
00:29:29.859 [2024-10-16 07:12:29.152277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.859 [2024-10-16 07:12:29.152306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.859 qpair failed and we were unable to recover it. 00:29:29.859 [2024-10-16 07:12:29.152679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.859 [2024-10-16 07:12:29.152707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.859 qpair failed and we were unable to recover it. 00:29:29.859 [2024-10-16 07:12:29.153032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.859 [2024-10-16 07:12:29.153062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.859 qpair failed and we were unable to recover it. 00:29:29.859 [2024-10-16 07:12:29.153433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.859 [2024-10-16 07:12:29.153462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.859 qpair failed and we were unable to recover it. 00:29:29.859 [2024-10-16 07:12:29.153721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.859 [2024-10-16 07:12:29.153750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.859 qpair failed and we were unable to recover it. 00:29:29.859 [2024-10-16 07:12:29.154122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.859 [2024-10-16 07:12:29.154151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.859 qpair failed and we were unable to recover it. 00:29:29.859 [2024-10-16 07:12:29.154502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.859 [2024-10-16 07:12:29.154531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.859 qpair failed and we were unable to recover it. 00:29:29.859 [2024-10-16 07:12:29.154987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.859 [2024-10-16 07:12:29.155017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.859 qpair failed and we were unable to recover it. 00:29:29.859 [2024-10-16 07:12:29.155380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.859 [2024-10-16 07:12:29.155408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.859 qpair failed and we were unable to recover it. 00:29:29.859 [2024-10-16 07:12:29.155747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.859 [2024-10-16 07:12:29.155774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.859 qpair failed and we were unable to recover it. 
00:29:29.859 [2024-10-16 07:12:29.156140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.859 [2024-10-16 07:12:29.156170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.859 qpair failed and we were unable to recover it. 00:29:29.859 [2024-10-16 07:12:29.156534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.859 [2024-10-16 07:12:29.156563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.859 qpair failed and we were unable to recover it. 00:29:29.859 [2024-10-16 07:12:29.156905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.859 [2024-10-16 07:12:29.156935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.859 qpair failed and we were unable to recover it. 00:29:29.859 [2024-10-16 07:12:29.157194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.859 [2024-10-16 07:12:29.157222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.859 qpair failed and we were unable to recover it. 00:29:29.859 [2024-10-16 07:12:29.157608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.859 [2024-10-16 07:12:29.157636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.859 qpair failed and we were unable to recover it. 00:29:29.859 [2024-10-16 07:12:29.158014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.859 [2024-10-16 07:12:29.158045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.859 qpair failed and we were unable to recover it. 00:29:29.859 [2024-10-16 07:12:29.158394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.859 [2024-10-16 07:12:29.158422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.859 qpair failed and we were unable to recover it. 00:29:29.859 [2024-10-16 07:12:29.158776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.859 [2024-10-16 07:12:29.158805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.859 qpair failed and we were unable to recover it. 00:29:29.859 [2024-10-16 07:12:29.159198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.859 [2024-10-16 07:12:29.159228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.859 qpair failed and we were unable to recover it. 00:29:29.860 [2024-10-16 07:12:29.159592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.860 [2024-10-16 07:12:29.159621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.860 qpair failed and we were unable to recover it. 
00:29:29.860 [2024-10-16 07:12:29.159964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.860 [2024-10-16 07:12:29.159995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.860 qpair failed and we were unable to recover it. 00:29:29.860 [2024-10-16 07:12:29.160342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.860 [2024-10-16 07:12:29.160372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.860 qpair failed and we were unable to recover it. 00:29:29.860 [2024-10-16 07:12:29.160744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.860 [2024-10-16 07:12:29.160772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.860 qpair failed and we were unable to recover it. 00:29:29.860 [2024-10-16 07:12:29.161135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.860 [2024-10-16 07:12:29.161165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.860 qpair failed and we were unable to recover it. 00:29:29.860 [2024-10-16 07:12:29.161530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.860 [2024-10-16 07:12:29.161559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.860 qpair failed and we were unable to recover it. 00:29:29.860 [2024-10-16 07:12:29.161943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.860 [2024-10-16 07:12:29.161973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.860 qpair failed and we were unable to recover it. 00:29:29.860 [2024-10-16 07:12:29.162340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.860 [2024-10-16 07:12:29.162369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.860 qpair failed and we were unable to recover it. 00:29:29.860 [2024-10-16 07:12:29.162741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.860 [2024-10-16 07:12:29.162770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.860 qpair failed and we were unable to recover it. 00:29:29.860 [2024-10-16 07:12:29.163106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.860 [2024-10-16 07:12:29.163136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.860 qpair failed and we were unable to recover it. 00:29:29.860 [2024-10-16 07:12:29.163426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.860 [2024-10-16 07:12:29.163455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.860 qpair failed and we were unable to recover it. 
00:29:29.860 [2024-10-16 07:12:29.163824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.860 [2024-10-16 07:12:29.163865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.860 qpair failed and we were unable to recover it. 00:29:29.860 [2024-10-16 07:12:29.164225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.860 [2024-10-16 07:12:29.164255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.860 qpair failed and we were unable to recover it. 00:29:29.860 [2024-10-16 07:12:29.164616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.860 [2024-10-16 07:12:29.164644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.860 qpair failed and we were unable to recover it. 00:29:29.860 [2024-10-16 07:12:29.165019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.860 [2024-10-16 07:12:29.165048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.860 qpair failed and we were unable to recover it. 00:29:29.860 [2024-10-16 07:12:29.165410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.860 [2024-10-16 07:12:29.165439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.860 qpair failed and we were unable to recover it. 00:29:29.860 [2024-10-16 07:12:29.165792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.860 [2024-10-16 07:12:29.165821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.860 qpair failed and we were unable to recover it. 00:29:29.860 [2024-10-16 07:12:29.166196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.860 [2024-10-16 07:12:29.166225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.860 qpair failed and we were unable to recover it. 00:29:29.860 [2024-10-16 07:12:29.166573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.860 [2024-10-16 07:12:29.166602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.860 qpair failed and we were unable to recover it. 00:29:29.860 [2024-10-16 07:12:29.166959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.860 [2024-10-16 07:12:29.166995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.860 qpair failed and we were unable to recover it. 00:29:29.860 [2024-10-16 07:12:29.167337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.860 [2024-10-16 07:12:29.167366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.860 qpair failed and we were unable to recover it. 
00:29:29.860 [2024-10-16 07:12:29.167730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.860 [2024-10-16 07:12:29.167760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.860 qpair failed and we were unable to recover it. 00:29:29.860 [2024-10-16 07:12:29.168101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.860 [2024-10-16 07:12:29.168131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.860 qpair failed and we were unable to recover it. 00:29:29.860 [2024-10-16 07:12:29.168472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.860 [2024-10-16 07:12:29.168502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.860 qpair failed and we were unable to recover it. 00:29:29.860 [2024-10-16 07:12:29.168866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.860 [2024-10-16 07:12:29.168898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.860 qpair failed and we were unable to recover it. 00:29:29.860 [2024-10-16 07:12:29.169446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.860 [2024-10-16 07:12:29.169483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.860 qpair failed and we were unable to recover it. 00:29:29.860 [2024-10-16 07:12:29.169750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.860 [2024-10-16 07:12:29.169785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.860 qpair failed and we were unable to recover it. 00:29:29.860 [2024-10-16 07:12:29.170084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.860 [2024-10-16 07:12:29.170115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.860 qpair failed and we were unable to recover it. 00:29:29.860 [2024-10-16 07:12:29.170463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.860 [2024-10-16 07:12:29.170492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.860 qpair failed and we were unable to recover it. 00:29:29.860 [2024-10-16 07:12:29.170915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.860 [2024-10-16 07:12:29.170946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.860 qpair failed and we were unable to recover it. 00:29:29.860 [2024-10-16 07:12:29.171333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.860 [2024-10-16 07:12:29.171362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.860 qpair failed and we were unable to recover it. 
00:29:29.860 [2024-10-16 07:12:29.171712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.860 [2024-10-16 07:12:29.171741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.860 qpair failed and we were unable to recover it. 00:29:29.860 [2024-10-16 07:12:29.172079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.860 [2024-10-16 07:12:29.172118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.860 qpair failed and we were unable to recover it. 00:29:29.860 [2024-10-16 07:12:29.172457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.860 [2024-10-16 07:12:29.172486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.860 qpair failed and we were unable to recover it. 00:29:29.860 [2024-10-16 07:12:29.172815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.860 [2024-10-16 07:12:29.172854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.860 qpair failed and we were unable to recover it. 00:29:29.860 [2024-10-16 07:12:29.173185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.860 [2024-10-16 07:12:29.173215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.860 qpair failed and we were unable to recover it. 00:29:29.860 [2024-10-16 07:12:29.173590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.861 [2024-10-16 07:12:29.173619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.861 qpair failed and we were unable to recover it. 00:29:29.861 [2024-10-16 07:12:29.174007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.861 [2024-10-16 07:12:29.174037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.861 qpair failed and we were unable to recover it. 00:29:29.861 [2024-10-16 07:12:29.174405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.861 [2024-10-16 07:12:29.174434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.861 qpair failed and we were unable to recover it. 00:29:29.861 [2024-10-16 07:12:29.174774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.861 [2024-10-16 07:12:29.174809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.861 qpair failed and we were unable to recover it. 00:29:29.861 [2024-10-16 07:12:29.175208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.861 [2024-10-16 07:12:29.175239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.861 qpair failed and we were unable to recover it. 
00:29:29.861 [2024-10-16 07:12:29.175599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.861 [2024-10-16 07:12:29.175628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.861 qpair failed and we were unable to recover it. 00:29:29.861 [2024-10-16 07:12:29.176004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.861 [2024-10-16 07:12:29.176034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.861 qpair failed and we were unable to recover it. 00:29:29.861 [2024-10-16 07:12:29.176395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.861 [2024-10-16 07:12:29.176425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.861 qpair failed and we were unable to recover it. 00:29:29.861 [2024-10-16 07:12:29.176871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.861 [2024-10-16 07:12:29.176901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.861 qpair failed and we were unable to recover it. 00:29:29.861 [2024-10-16 07:12:29.177183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.861 [2024-10-16 07:12:29.177212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.861 qpair failed and we were unable to recover it. 00:29:29.861 [2024-10-16 07:12:29.177561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.861 [2024-10-16 07:12:29.177591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.861 qpair failed and we were unable to recover it. 00:29:29.861 [2024-10-16 07:12:29.177926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.861 [2024-10-16 07:12:29.177956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.861 qpair failed and we were unable to recover it. 00:29:29.861 [2024-10-16 07:12:29.178324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.861 [2024-10-16 07:12:29.178361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.861 qpair failed and we were unable to recover it. 00:29:29.861 [2024-10-16 07:12:29.178698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.861 [2024-10-16 07:12:29.178726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.861 qpair failed and we were unable to recover it. 00:29:29.861 [2024-10-16 07:12:29.179082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.861 [2024-10-16 07:12:29.179112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.861 qpair failed and we were unable to recover it. 
00:29:29.861 [2024-10-16 07:12:29.179471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.861 [2024-10-16 07:12:29.179501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.861 qpair failed and we were unable to recover it. 00:29:29.861 [2024-10-16 07:12:29.179876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.861 [2024-10-16 07:12:29.179908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.861 qpair failed and we were unable to recover it. 00:29:29.861 [2024-10-16 07:12:29.180283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.861 [2024-10-16 07:12:29.180312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.861 qpair failed and we were unable to recover it. 00:29:29.861 [2024-10-16 07:12:29.180693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.861 [2024-10-16 07:12:29.180722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.861 qpair failed and we were unable to recover it. 00:29:29.861 [2024-10-16 07:12:29.181085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.861 [2024-10-16 07:12:29.181117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.861 qpair failed and we were unable to recover it. 00:29:29.861 [2024-10-16 07:12:29.181467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.861 [2024-10-16 07:12:29.181495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.861 qpair failed and we were unable to recover it. 00:29:29.861 [2024-10-16 07:12:29.181865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.861 [2024-10-16 07:12:29.181898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.861 qpair failed and we were unable to recover it. 00:29:29.861 [2024-10-16 07:12:29.182277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.861 [2024-10-16 07:12:29.182306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.861 qpair failed and we were unable to recover it. 00:29:29.861 [2024-10-16 07:12:29.182657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.861 [2024-10-16 07:12:29.182687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.861 qpair failed and we were unable to recover it. 00:29:29.861 [2024-10-16 07:12:29.182942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.861 [2024-10-16 07:12:29.182976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.861 qpair failed and we were unable to recover it. 
00:29:29.861 [2024-10-16 07:12:29.183330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.861 [2024-10-16 07:12:29.183361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.861 qpair failed and we were unable to recover it. 00:29:29.861 [2024-10-16 07:12:29.183703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.861 [2024-10-16 07:12:29.183731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.861 qpair failed and we were unable to recover it. 00:29:29.861 [2024-10-16 07:12:29.184075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.861 [2024-10-16 07:12:29.184106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.861 qpair failed and we were unable to recover it. 00:29:29.861 [2024-10-16 07:12:29.184474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.861 [2024-10-16 07:12:29.184504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.861 qpair failed and we were unable to recover it. 00:29:29.861 [2024-10-16 07:12:29.184863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.861 [2024-10-16 07:12:29.184894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.861 qpair failed and we were unable to recover it. 00:29:29.861 [2024-10-16 07:12:29.185257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-10-16 07:12:29.185292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-10-16 07:12:29.185619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-10-16 07:12:29.185649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-10-16 07:12:29.186025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-10-16 07:12:29.186055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-10-16 07:12:29.186314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-10-16 07:12:29.186346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-10-16 07:12:29.186722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-10-16 07:12:29.186750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 
00:29:29.862 [2024-10-16 07:12:29.187093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-10-16 07:12:29.187125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-10-16 07:12:29.187465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-10-16 07:12:29.187495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-10-16 07:12:29.187864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-10-16 07:12:29.187895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-10-16 07:12:29.188266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-10-16 07:12:29.188295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-10-16 07:12:29.188649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-10-16 07:12:29.188678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-10-16 07:12:29.189033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-10-16 07:12:29.189063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-10-16 07:12:29.189422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-10-16 07:12:29.189451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-10-16 07:12:29.189777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-10-16 07:12:29.189807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-10-16 07:12:29.190200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-10-16 07:12:29.190231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-10-16 07:12:29.190585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-10-16 07:12:29.190615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 
00:29:29.862 [2024-10-16 07:12:29.190872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-10-16 07:12:29.190905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-10-16 07:12:29.191253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-10-16 07:12:29.191284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-10-16 07:12:29.191618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-10-16 07:12:29.191647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-10-16 07:12:29.192014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-10-16 07:12:29.192046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-10-16 07:12:29.192412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-10-16 07:12:29.192441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-10-16 07:12:29.192807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-10-16 07:12:29.192836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-10-16 07:12:29.193257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-10-16 07:12:29.193287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-10-16 07:12:29.193648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-10-16 07:12:29.193680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-10-16 07:12:29.193947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-10-16 07:12:29.193980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-10-16 07:12:29.194356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-10-16 07:12:29.194385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 
00:29:29.862 [2024-10-16 07:12:29.194743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-10-16 07:12:29.194775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-10-16 07:12:29.195136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-10-16 07:12:29.195167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-10-16 07:12:29.195420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-10-16 07:12:29.195450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-10-16 07:12:29.195817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-10-16 07:12:29.195860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-10-16 07:12:29.196257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-10-16 07:12:29.196287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-10-16 07:12:29.196641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-10-16 07:12:29.196673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-10-16 07:12:29.197025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-10-16 07:12:29.197056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-10-16 07:12:29.197389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-10-16 07:12:29.197418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-10-16 07:12:29.197776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-10-16 07:12:29.197806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-10-16 07:12:29.198205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.863 [2024-10-16 07:12:29.198238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.863 qpair failed and we were unable to recover it. 
00:29:29.863 [2024-10-16 07:12:29.198595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.863 [2024-10-16 07:12:29.198625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.863 qpair failed and we were unable to recover it. 00:29:29.863 [2024-10-16 07:12:29.199006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.863 [2024-10-16 07:12:29.199037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.863 qpair failed and we were unable to recover it. 00:29:29.863 [2024-10-16 07:12:29.199389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.863 [2024-10-16 07:12:29.199419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.863 qpair failed and we were unable to recover it. 00:29:29.863 [2024-10-16 07:12:29.199766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.863 [2024-10-16 07:12:29.199796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.863 qpair failed and we were unable to recover it. 00:29:29.863 [2024-10-16 07:12:29.200206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.863 [2024-10-16 07:12:29.200237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.863 qpair failed and we were unable to recover it. 00:29:29.863 [2024-10-16 07:12:29.200601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.863 [2024-10-16 07:12:29.200632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.863 qpair failed and we were unable to recover it. 00:29:29.863 [2024-10-16 07:12:29.201024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.863 [2024-10-16 07:12:29.201056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.863 qpair failed and we were unable to recover it. 00:29:29.863 [2024-10-16 07:12:29.201422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.863 [2024-10-16 07:12:29.201455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.863 qpair failed and we were unable to recover it. 00:29:29.863 [2024-10-16 07:12:29.201675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.863 [2024-10-16 07:12:29.201707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.863 qpair failed and we were unable to recover it. 00:29:29.863 [2024-10-16 07:12:29.202041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.863 [2024-10-16 07:12:29.202072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.863 qpair failed and we were unable to recover it. 
00:29:29.863 [2024-10-16 07:12:29.202466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.863 [2024-10-16 07:12:29.202495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420
00:29:29.863 qpair failed and we were unable to recover it.
[... the same three-line failure record (posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats verbatim for every reconnect attempt timestamped 07:12:29.202868 through 07:12:29.282249, elapsed 00:29:29.863-00:29:29.869 ...]
00:29:29.869 [2024-10-16 07:12:29.282646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.869 [2024-10-16 07:12:29.282674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420
00:29:29.869 qpair failed and we were unable to recover it.
00:29:29.869 [2024-10-16 07:12:29.283050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.869 [2024-10-16 07:12:29.283080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.869 qpair failed and we were unable to recover it. 00:29:29.869 [2024-10-16 07:12:29.283458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.869 [2024-10-16 07:12:29.283486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.869 qpair failed and we were unable to recover it. 00:29:29.869 [2024-10-16 07:12:29.283825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.869 [2024-10-16 07:12:29.283863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.869 qpair failed and we were unable to recover it. 00:29:29.869 [2024-10-16 07:12:29.284103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.869 [2024-10-16 07:12:29.284132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.869 qpair failed and we were unable to recover it. 00:29:29.869 [2024-10-16 07:12:29.284493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.869 [2024-10-16 07:12:29.284522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.869 qpair failed and we were unable to recover it. 00:29:29.869 [2024-10-16 07:12:29.284900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.869 [2024-10-16 07:12:29.284931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.869 qpair failed and we were unable to recover it. 00:29:29.869 [2024-10-16 07:12:29.285279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.869 [2024-10-16 07:12:29.285307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.869 qpair failed and we were unable to recover it. 00:29:29.869 [2024-10-16 07:12:29.285708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.869 [2024-10-16 07:12:29.285738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.869 qpair failed and we were unable to recover it. 00:29:29.869 [2024-10-16 07:12:29.286076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.869 [2024-10-16 07:12:29.286107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.869 qpair failed and we were unable to recover it. 00:29:29.869 [2024-10-16 07:12:29.286439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.869 [2024-10-16 07:12:29.286469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.869 qpair failed and we were unable to recover it. 
00:29:29.869 [2024-10-16 07:12:29.286833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.869 [2024-10-16 07:12:29.286875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.869 qpair failed and we were unable to recover it. 00:29:29.869 [2024-10-16 07:12:29.287225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.869 [2024-10-16 07:12:29.287254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.869 qpair failed and we were unable to recover it. 00:29:29.869 [2024-10-16 07:12:29.287621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.869 [2024-10-16 07:12:29.287650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.869 qpair failed and we were unable to recover it. 00:29:29.869 [2024-10-16 07:12:29.288027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.869 [2024-10-16 07:12:29.288057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.869 qpair failed and we were unable to recover it. 00:29:29.869 [2024-10-16 07:12:29.288414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.869 [2024-10-16 07:12:29.288443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.869 qpair failed and we were unable to recover it. 00:29:29.869 [2024-10-16 07:12:29.288798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.869 [2024-10-16 07:12:29.288827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.869 qpair failed and we were unable to recover it. 00:29:29.869 [2024-10-16 07:12:29.289205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.869 [2024-10-16 07:12:29.289234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.869 qpair failed and we were unable to recover it. 00:29:29.869 [2024-10-16 07:12:29.289489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.869 [2024-10-16 07:12:29.289518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.869 qpair failed and we were unable to recover it. 00:29:29.869 [2024-10-16 07:12:29.289885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.869 [2024-10-16 07:12:29.289916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.869 qpair failed and we were unable to recover it. 00:29:29.869 [2024-10-16 07:12:29.290290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.869 [2024-10-16 07:12:29.290319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.869 qpair failed and we were unable to recover it. 
00:29:29.869 [2024-10-16 07:12:29.290675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.869 [2024-10-16 07:12:29.290703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.869 qpair failed and we were unable to recover it. 00:29:29.869 [2024-10-16 07:12:29.291070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.869 [2024-10-16 07:12:29.291100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.869 qpair failed and we were unable to recover it. 00:29:29.869 [2024-10-16 07:12:29.291497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.869 [2024-10-16 07:12:29.291524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.869 qpair failed and we were unable to recover it. 00:29:29.869 [2024-10-16 07:12:29.291884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.869 [2024-10-16 07:12:29.291914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.869 qpair failed and we were unable to recover it. 00:29:29.869 [2024-10-16 07:12:29.292277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.869 [2024-10-16 07:12:29.292305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.869 qpair failed and we were unable to recover it. 00:29:29.869 [2024-10-16 07:12:29.292595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.869 [2024-10-16 07:12:29.292623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.869 qpair failed and we were unable to recover it. 00:29:29.869 [2024-10-16 07:12:29.292874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.869 [2024-10-16 07:12:29.292907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.869 qpair failed and we were unable to recover it. 00:29:29.869 [2024-10-16 07:12:29.293295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.869 [2024-10-16 07:12:29.293322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.869 qpair failed and we were unable to recover it. 00:29:29.869 [2024-10-16 07:12:29.293682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.869 [2024-10-16 07:12:29.293709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.869 qpair failed and we were unable to recover it. 00:29:29.869 [2024-10-16 07:12:29.294095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.869 [2024-10-16 07:12:29.294124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.869 qpair failed and we were unable to recover it. 
00:29:29.869 [2024-10-16 07:12:29.294472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.869 [2024-10-16 07:12:29.294507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.869 qpair failed and we were unable to recover it. 00:29:29.869 [2024-10-16 07:12:29.294866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.869 [2024-10-16 07:12:29.294896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.869 qpair failed and we were unable to recover it. 00:29:29.869 [2024-10-16 07:12:29.295216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.869 [2024-10-16 07:12:29.295245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.869 qpair failed and we were unable to recover it. 00:29:29.869 [2024-10-16 07:12:29.295618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.869 [2024-10-16 07:12:29.295647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.869 qpair failed and we were unable to recover it. 00:29:29.869 [2024-10-16 07:12:29.296010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.869 [2024-10-16 07:12:29.296042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.869 qpair failed and we were unable to recover it. 00:29:29.869 [2024-10-16 07:12:29.296408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-10-16 07:12:29.296439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.870 [2024-10-16 07:12:29.296809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-10-16 07:12:29.296840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.870 [2024-10-16 07:12:29.297245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-10-16 07:12:29.297276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.870 [2024-10-16 07:12:29.297626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-10-16 07:12:29.297657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.870 [2024-10-16 07:12:29.298023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-10-16 07:12:29.298055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 
00:29:29.870 [2024-10-16 07:12:29.298419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-10-16 07:12:29.298449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.870 [2024-10-16 07:12:29.298771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-10-16 07:12:29.298803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.870 [2024-10-16 07:12:29.299198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-10-16 07:12:29.299229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.870 [2024-10-16 07:12:29.299593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-10-16 07:12:29.299625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.870 [2024-10-16 07:12:29.299866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-10-16 07:12:29.299898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.870 [2024-10-16 07:12:29.300261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-10-16 07:12:29.300292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.870 [2024-10-16 07:12:29.300644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-10-16 07:12:29.300674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.870 [2024-10-16 07:12:29.301024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-10-16 07:12:29.301056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.870 [2024-10-16 07:12:29.301424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-10-16 07:12:29.301454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.870 [2024-10-16 07:12:29.301822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-10-16 07:12:29.301872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 
00:29:29.870 [2024-10-16 07:12:29.302242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-10-16 07:12:29.302273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.870 [2024-10-16 07:12:29.302558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-10-16 07:12:29.302589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.870 [2024-10-16 07:12:29.302962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-10-16 07:12:29.302996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.870 [2024-10-16 07:12:29.303258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-10-16 07:12:29.303289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.870 [2024-10-16 07:12:29.303650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-10-16 07:12:29.303681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.870 [2024-10-16 07:12:29.304027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-10-16 07:12:29.304058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.870 [2024-10-16 07:12:29.304402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-10-16 07:12:29.304432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.870 [2024-10-16 07:12:29.304801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-10-16 07:12:29.304839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.870 [2024-10-16 07:12:29.305231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-10-16 07:12:29.305264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.870 [2024-10-16 07:12:29.305630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-10-16 07:12:29.305663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 
00:29:29.870 [2024-10-16 07:12:29.306061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-10-16 07:12:29.306092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.870 [2024-10-16 07:12:29.306467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-10-16 07:12:29.306499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.870 [2024-10-16 07:12:29.306856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.870 [2024-10-16 07:12:29.306887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.870 qpair failed and we were unable to recover it. 00:29:29.871 [2024-10-16 07:12:29.307165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-10-16 07:12:29.307195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-10-16 07:12:29.307547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-10-16 07:12:29.307579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-10-16 07:12:29.307942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-10-16 07:12:29.307974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-10-16 07:12:29.308229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-10-16 07:12:29.308259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-10-16 07:12:29.308623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-10-16 07:12:29.308655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-10-16 07:12:29.308904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-10-16 07:12:29.308938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-10-16 07:12:29.309294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-10-16 07:12:29.309326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 
00:29:29.871 [2024-10-16 07:12:29.309670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-10-16 07:12:29.309700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-10-16 07:12:29.310041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-10-16 07:12:29.310073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-10-16 07:12:29.310426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-10-16 07:12:29.310458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-10-16 07:12:29.310815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-10-16 07:12:29.310861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-10-16 07:12:29.311203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-10-16 07:12:29.311235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-10-16 07:12:29.311592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-10-16 07:12:29.311622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-10-16 07:12:29.311984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-10-16 07:12:29.312016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-10-16 07:12:29.312386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-10-16 07:12:29.312416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-10-16 07:12:29.312771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-10-16 07:12:29.312803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-10-16 07:12:29.313164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-10-16 07:12:29.313194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 
00:29:29.871 [2024-10-16 07:12:29.313458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-10-16 07:12:29.313488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-10-16 07:12:29.313719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-10-16 07:12:29.313749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-10-16 07:12:29.314101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-10-16 07:12:29.314133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-10-16 07:12:29.314480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-10-16 07:12:29.314510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-10-16 07:12:29.314866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-10-16 07:12:29.314904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-10-16 07:12:29.315296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-10-16 07:12:29.315327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-10-16 07:12:29.315711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-10-16 07:12:29.315740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-10-16 07:12:29.316082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-10-16 07:12:29.316114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-10-16 07:12:29.316477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-10-16 07:12:29.316508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-10-16 07:12:29.316753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-10-16 07:12:29.316786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 
00:29:29.871 [2024-10-16 07:12:29.317151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-10-16 07:12:29.317182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-10-16 07:12:29.317548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-10-16 07:12:29.317580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-10-16 07:12:29.317948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-10-16 07:12:29.317980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-10-16 07:12:29.318349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-10-16 07:12:29.318380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-10-16 07:12:29.318733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-10-16 07:12:29.318764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-10-16 07:12:29.319106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-10-16 07:12:29.319135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-10-16 07:12:29.319510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-10-16 07:12:29.319540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-10-16 07:12:29.319879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-10-16 07:12:29.319908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-10-16 07:12:29.320294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-10-16 07:12:29.320323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-10-16 07:12:29.320683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-10-16 07:12:29.320712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 
00:29:29.871 [2024-10-16 07:12:29.321082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-10-16 07:12:29.321111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-10-16 07:12:29.321399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.871 [2024-10-16 07:12:29.321427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.871 qpair failed and we were unable to recover it. 00:29:29.871 [2024-10-16 07:12:29.321685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-10-16 07:12:29.321716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 00:29:29.872 [2024-10-16 07:12:29.322085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-10-16 07:12:29.322116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 00:29:29.872 [2024-10-16 07:12:29.322462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-10-16 07:12:29.322491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 00:29:29.872 [2024-10-16 07:12:29.322874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-10-16 07:12:29.322904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 00:29:29.872 [2024-10-16 07:12:29.323248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-10-16 07:12:29.323283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 00:29:29.872 [2024-10-16 07:12:29.323637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-10-16 07:12:29.323667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 00:29:29.872 [2024-10-16 07:12:29.324041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-10-16 07:12:29.324070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 00:29:29.872 [2024-10-16 07:12:29.324422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-10-16 07:12:29.324452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 
00:29:29.872 [2024-10-16 07:12:29.324825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-10-16 07:12:29.324865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 00:29:29.872 [2024-10-16 07:12:29.325250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-10-16 07:12:29.325279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 00:29:29.872 [2024-10-16 07:12:29.325647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-10-16 07:12:29.325677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 00:29:29.872 [2024-10-16 07:12:29.326059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-10-16 07:12:29.326090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 00:29:29.872 [2024-10-16 07:12:29.326347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-10-16 07:12:29.326376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 00:29:29.872 [2024-10-16 07:12:29.326735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-10-16 07:12:29.326764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 00:29:29.872 [2024-10-16 07:12:29.327145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-10-16 07:12:29.327175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 00:29:29.872 [2024-10-16 07:12:29.327550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-10-16 07:12:29.327578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 00:29:29.872 [2024-10-16 07:12:29.327925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-10-16 07:12:29.327955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 00:29:29.872 [2024-10-16 07:12:29.328326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-10-16 07:12:29.328357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 
00:29:29.872 [2024-10-16 07:12:29.328708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-10-16 07:12:29.328736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 00:29:29.872 [2024-10-16 07:12:29.329060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-10-16 07:12:29.329090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 00:29:29.872 [2024-10-16 07:12:29.329451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-10-16 07:12:29.329481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 00:29:29.872 [2024-10-16 07:12:29.329856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-10-16 07:12:29.329886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 00:29:29.872 [2024-10-16 07:12:29.330281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-10-16 07:12:29.330310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 00:29:29.872 [2024-10-16 07:12:29.330590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-10-16 07:12:29.330620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 00:29:29.872 [2024-10-16 07:12:29.330759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-10-16 07:12:29.330791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 00:29:29.872 [2024-10-16 07:12:29.331190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-10-16 07:12:29.331221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 00:29:29.872 [2024-10-16 07:12:29.331586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-10-16 07:12:29.331616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 00:29:29.872 [2024-10-16 07:12:29.331997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-10-16 07:12:29.332028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 
00:29:29.872 [2024-10-16 07:12:29.332384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.872 [2024-10-16 07:12:29.332414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:29.872 qpair failed and we were unable to recover it. 
[... the same connect() failed (errno = 111) / sock connection error pair for tqpair=0x1712180 (addr=10.0.0.2, port=4420), each ending in "qpair failed and we were unable to recover it.", repeats continuously from 07:12:29.332384 through 07:12:29.412633; duplicate log entries omitted ...] 
00:29:30.152 [2024-10-16 07:12:29.412603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.152 [2024-10-16 07:12:29.412633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.152 qpair failed and we were unable to recover it. 
00:29:30.152 [2024-10-16 07:12:29.412996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.152 [2024-10-16 07:12:29.413030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.152 qpair failed and we were unable to recover it. 00:29:30.152 [2024-10-16 07:12:29.413405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.152 [2024-10-16 07:12:29.413435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.152 qpair failed and we were unable to recover it. 00:29:30.152 [2024-10-16 07:12:29.413784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.152 [2024-10-16 07:12:29.413815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.152 qpair failed and we were unable to recover it. 00:29:30.152 [2024-10-16 07:12:29.414193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.152 [2024-10-16 07:12:29.414224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.152 qpair failed and we were unable to recover it. 00:29:30.152 [2024-10-16 07:12:29.414583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.152 [2024-10-16 07:12:29.414614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.152 qpair failed and we were unable to recover it. 00:29:30.152 [2024-10-16 07:12:29.414955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.152 [2024-10-16 07:12:29.414986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.152 qpair failed and we were unable to recover it. 00:29:30.152 [2024-10-16 07:12:29.415362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.152 [2024-10-16 07:12:29.415391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.152 qpair failed and we were unable to recover it. 00:29:30.152 [2024-10-16 07:12:29.415759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.152 [2024-10-16 07:12:29.415788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.152 qpair failed and we were unable to recover it. 00:29:30.152 [2024-10-16 07:12:29.416153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.152 [2024-10-16 07:12:29.416183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.152 qpair failed and we were unable to recover it. 00:29:30.152 [2024-10-16 07:12:29.416532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.152 [2024-10-16 07:12:29.416560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.152 qpair failed and we were unable to recover it. 
00:29:30.152 [2024-10-16 07:12:29.416947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.152 [2024-10-16 07:12:29.416978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.152 qpair failed and we were unable to recover it. 00:29:30.152 [2024-10-16 07:12:29.417226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.152 [2024-10-16 07:12:29.417258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.152 qpair failed and we were unable to recover it. 00:29:30.152 [2024-10-16 07:12:29.417604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.152 [2024-10-16 07:12:29.417634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.152 qpair failed and we were unable to recover it. 00:29:30.152 [2024-10-16 07:12:29.417956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.152 [2024-10-16 07:12:29.417986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.152 qpair failed and we were unable to recover it. 00:29:30.152 [2024-10-16 07:12:29.418238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.152 [2024-10-16 07:12:29.418267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.152 qpair failed and we were unable to recover it. 00:29:30.152 [2024-10-16 07:12:29.418504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.152 [2024-10-16 07:12:29.418534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.152 qpair failed and we were unable to recover it. 00:29:30.152 [2024-10-16 07:12:29.418889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.152 [2024-10-16 07:12:29.418932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.152 qpair failed and we were unable to recover it. 00:29:30.152 [2024-10-16 07:12:29.419188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.152 [2024-10-16 07:12:29.419217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.152 qpair failed and we were unable to recover it. 00:29:30.152 [2024-10-16 07:12:29.419567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.152 [2024-10-16 07:12:29.419596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.152 qpair failed and we were unable to recover it. 00:29:30.152 [2024-10-16 07:12:29.419945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.152 [2024-10-16 07:12:29.419975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.152 qpair failed and we were unable to recover it. 
00:29:30.152 [2024-10-16 07:12:29.420314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.152 [2024-10-16 07:12:29.420344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.152 qpair failed and we were unable to recover it. 00:29:30.152 [2024-10-16 07:12:29.420699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.152 [2024-10-16 07:12:29.420728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.152 qpair failed and we were unable to recover it. 00:29:30.152 [2024-10-16 07:12:29.421149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.152 [2024-10-16 07:12:29.421179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.152 qpair failed and we were unable to recover it. 00:29:30.152 [2024-10-16 07:12:29.421532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.152 [2024-10-16 07:12:29.421562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.152 qpair failed and we were unable to recover it. 00:29:30.153 [2024-10-16 07:12:29.421921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.153 [2024-10-16 07:12:29.421951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.153 qpair failed and we were unable to recover it. 00:29:30.153 [2024-10-16 07:12:29.422289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.153 [2024-10-16 07:12:29.422319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.153 qpair failed and we were unable to recover it. 00:29:30.153 [2024-10-16 07:12:29.422663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.153 [2024-10-16 07:12:29.422692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.153 qpair failed and we were unable to recover it. 00:29:30.153 [2024-10-16 07:12:29.423078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.153 [2024-10-16 07:12:29.423109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.153 qpair failed and we were unable to recover it. 00:29:30.153 [2024-10-16 07:12:29.423471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.153 [2024-10-16 07:12:29.423502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.153 qpair failed and we were unable to recover it. 00:29:30.153 [2024-10-16 07:12:29.423855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.153 [2024-10-16 07:12:29.423892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.153 qpair failed and we were unable to recover it. 
00:29:30.153 [2024-10-16 07:12:29.424253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.153 [2024-10-16 07:12:29.424283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.153 qpair failed and we were unable to recover it. 00:29:30.153 [2024-10-16 07:12:29.424643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.153 [2024-10-16 07:12:29.424674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.153 qpair failed and we were unable to recover it. 00:29:30.153 [2024-10-16 07:12:29.425046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.153 [2024-10-16 07:12:29.425076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.153 qpair failed and we were unable to recover it. 00:29:30.153 [2024-10-16 07:12:29.425459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.153 [2024-10-16 07:12:29.425488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.153 qpair failed and we were unable to recover it. 00:29:30.153 [2024-10-16 07:12:29.425861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.153 [2024-10-16 07:12:29.425892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.153 qpair failed and we were unable to recover it. 00:29:30.153 [2024-10-16 07:12:29.426169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.153 [2024-10-16 07:12:29.426198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.153 qpair failed and we were unable to recover it. 00:29:30.153 [2024-10-16 07:12:29.426568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.153 [2024-10-16 07:12:29.426597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.153 qpair failed and we were unable to recover it. 00:29:30.153 [2024-10-16 07:12:29.426971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.153 [2024-10-16 07:12:29.427002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.153 qpair failed and we were unable to recover it. 00:29:30.153 [2024-10-16 07:12:29.427353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.153 [2024-10-16 07:12:29.427382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.153 qpair failed and we were unable to recover it. 00:29:30.153 [2024-10-16 07:12:29.427775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.153 [2024-10-16 07:12:29.427805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.153 qpair failed and we were unable to recover it. 
00:29:30.153 [2024-10-16 07:12:29.428222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.153 [2024-10-16 07:12:29.428253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.153 qpair failed and we were unable to recover it. 00:29:30.153 [2024-10-16 07:12:29.428640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.153 [2024-10-16 07:12:29.428670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.153 qpair failed and we were unable to recover it. 00:29:30.153 [2024-10-16 07:12:29.429023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.153 [2024-10-16 07:12:29.429054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.153 qpair failed and we were unable to recover it. 00:29:30.153 [2024-10-16 07:12:29.429415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.153 [2024-10-16 07:12:29.429444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.153 qpair failed and we were unable to recover it. 00:29:30.153 [2024-10-16 07:12:29.429788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.153 [2024-10-16 07:12:29.429817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.153 qpair failed and we were unable to recover it. 00:29:30.153 [2024-10-16 07:12:29.430006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.153 [2024-10-16 07:12:29.430035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.153 qpair failed and we were unable to recover it. 00:29:30.153 [2024-10-16 07:12:29.430406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.153 [2024-10-16 07:12:29.430436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.153 qpair failed and we were unable to recover it. 00:29:30.153 [2024-10-16 07:12:29.430793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.153 [2024-10-16 07:12:29.430822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.153 qpair failed and we were unable to recover it. 00:29:30.153 [2024-10-16 07:12:29.431226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.153 [2024-10-16 07:12:29.431256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.153 qpair failed and we were unable to recover it. 00:29:30.153 [2024-10-16 07:12:29.431631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.153 [2024-10-16 07:12:29.431662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.153 qpair failed and we were unable to recover it. 
00:29:30.153 [2024-10-16 07:12:29.432013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.153 [2024-10-16 07:12:29.432044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.153 qpair failed and we were unable to recover it. 00:29:30.153 [2024-10-16 07:12:29.432406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.153 [2024-10-16 07:12:29.432435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.153 qpair failed and we were unable to recover it. 00:29:30.153 [2024-10-16 07:12:29.432802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.153 [2024-10-16 07:12:29.432830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.153 qpair failed and we were unable to recover it. 00:29:30.153 [2024-10-16 07:12:29.433215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.153 [2024-10-16 07:12:29.433244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.153 qpair failed and we were unable to recover it. 00:29:30.153 [2024-10-16 07:12:29.433608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.153 [2024-10-16 07:12:29.433637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.153 qpair failed and we were unable to recover it. 00:29:30.153 [2024-10-16 07:12:29.433993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.153 [2024-10-16 07:12:29.434024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.153 qpair failed and we were unable to recover it. 00:29:30.153 [2024-10-16 07:12:29.434381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.153 [2024-10-16 07:12:29.434422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.153 qpair failed and we were unable to recover it. 00:29:30.153 [2024-10-16 07:12:29.434751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.153 [2024-10-16 07:12:29.434780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.153 qpair failed and we were unable to recover it. 00:29:30.153 [2024-10-16 07:12:29.435147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.153 [2024-10-16 07:12:29.435177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.153 qpair failed and we were unable to recover it. 00:29:30.153 [2024-10-16 07:12:29.435538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.153 [2024-10-16 07:12:29.435567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.153 qpair failed and we were unable to recover it. 
00:29:30.153 [2024-10-16 07:12:29.435926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-10-16 07:12:29.435956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-10-16 07:12:29.436322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-10-16 07:12:29.436351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-10-16 07:12:29.436708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-10-16 07:12:29.436737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-10-16 07:12:29.437097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-10-16 07:12:29.437128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-10-16 07:12:29.437484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-10-16 07:12:29.437514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-10-16 07:12:29.437866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-10-16 07:12:29.437897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-10-16 07:12:29.438256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-10-16 07:12:29.438285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-10-16 07:12:29.438686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-10-16 07:12:29.438716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-10-16 07:12:29.439090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-10-16 07:12:29.439122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-10-16 07:12:29.439458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-10-16 07:12:29.439488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 
00:29:30.154 [2024-10-16 07:12:29.439887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-10-16 07:12:29.439919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-10-16 07:12:29.440275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-10-16 07:12:29.440304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-10-16 07:12:29.440668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-10-16 07:12:29.440699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-10-16 07:12:29.441080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-10-16 07:12:29.441111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-10-16 07:12:29.441474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-10-16 07:12:29.441503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-10-16 07:12:29.441864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-10-16 07:12:29.441894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-10-16 07:12:29.442251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-10-16 07:12:29.442280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-10-16 07:12:29.442657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-10-16 07:12:29.442687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-10-16 07:12:29.443029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-10-16 07:12:29.443062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-10-16 07:12:29.443476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-10-16 07:12:29.443509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 
00:29:30.154 [2024-10-16 07:12:29.443865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-10-16 07:12:29.443898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-10-16 07:12:29.444241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-10-16 07:12:29.444274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-10-16 07:12:29.444537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-10-16 07:12:29.444571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-10-16 07:12:29.444975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-10-16 07:12:29.445010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-10-16 07:12:29.445415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-10-16 07:12:29.445446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-10-16 07:12:29.445768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-10-16 07:12:29.445802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-10-16 07:12:29.446182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-10-16 07:12:29.446214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-10-16 07:12:29.446577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-10-16 07:12:29.446609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-10-16 07:12:29.447027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-10-16 07:12:29.447063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-10-16 07:12:29.447453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-10-16 07:12:29.447482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 
00:29:30.154 [2024-10-16 07:12:29.447834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-10-16 07:12:29.447876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-10-16 07:12:29.448240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-10-16 07:12:29.448270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-10-16 07:12:29.448621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-10-16 07:12:29.448650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-10-16 07:12:29.449029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-10-16 07:12:29.449060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-10-16 07:12:29.449315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-10-16 07:12:29.449343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-10-16 07:12:29.449767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-10-16 07:12:29.449799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-10-16 07:12:29.450157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-10-16 07:12:29.450187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 00:29:30.155 [2024-10-16 07:12:29.450544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-10-16 07:12:29.450575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 00:29:30.155 [2024-10-16 07:12:29.450835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-10-16 07:12:29.450880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 00:29:30.155 [2024-10-16 07:12:29.451264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-10-16 07:12:29.451294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 
00:29:30.155 [2024-10-16 07:12:29.451667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-10-16 07:12:29.451697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 00:29:30.155 [2024-10-16 07:12:29.452196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-10-16 07:12:29.452230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 00:29:30.155 [2024-10-16 07:12:29.452567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-10-16 07:12:29.452596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 00:29:30.155 [2024-10-16 07:12:29.452928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-10-16 07:12:29.452959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 00:29:30.155 [2024-10-16 07:12:29.453227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-10-16 07:12:29.453257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 00:29:30.155 [2024-10-16 07:12:29.453628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-10-16 07:12:29.453658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 00:29:30.155 [2024-10-16 07:12:29.454033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-10-16 07:12:29.454067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 00:29:30.155 [2024-10-16 07:12:29.454322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-10-16 07:12:29.454355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 00:29:30.155 [2024-10-16 07:12:29.454738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-10-16 07:12:29.454768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 00:29:30.155 [2024-10-16 07:12:29.455106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-10-16 07:12:29.455137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 
00:29:30.155 [2024-10-16 07:12:29.455505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-10-16 07:12:29.455534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 00:29:30.155 [2024-10-16 07:12:29.455903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-10-16 07:12:29.455936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 00:29:30.155 [2024-10-16 07:12:29.456309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-10-16 07:12:29.456340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 00:29:30.155 [2024-10-16 07:12:29.456681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-10-16 07:12:29.456710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 00:29:30.155 [2024-10-16 07:12:29.457110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-10-16 07:12:29.457142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 00:29:30.155 [2024-10-16 07:12:29.457543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-10-16 07:12:29.457571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 00:29:30.155 [2024-10-16 07:12:29.457942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-10-16 07:12:29.457975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 00:29:30.155 [2024-10-16 07:12:29.458327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-10-16 07:12:29.458356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 00:29:30.155 [2024-10-16 07:12:29.458719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-10-16 07:12:29.458749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 00:29:30.155 [2024-10-16 07:12:29.459138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-10-16 07:12:29.459169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 
00:29:30.155 [2024-10-16 07:12:29.459549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-10-16 07:12:29.459579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 00:29:30.155 [2024-10-16 07:12:29.459925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-10-16 07:12:29.459957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 00:29:30.155 [2024-10-16 07:12:29.460336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-10-16 07:12:29.460367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 00:29:30.155 [2024-10-16 07:12:29.460713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-10-16 07:12:29.460742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 00:29:30.155 [2024-10-16 07:12:29.461098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-10-16 07:12:29.461134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 00:29:30.155 [2024-10-16 07:12:29.461466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-10-16 07:12:29.461496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 00:29:30.155 [2024-10-16 07:12:29.461858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-10-16 07:12:29.461889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 00:29:30.155 [2024-10-16 07:12:29.462250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-10-16 07:12:29.462281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 00:29:30.155 [2024-10-16 07:12:29.462661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-10-16 07:12:29.462690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 00:29:30.155 [2024-10-16 07:12:29.463035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-10-16 07:12:29.463064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 
00:29:30.155 [2024-10-16 07:12:29.463439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.155 [2024-10-16 07:12:29.463469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420
00:29:30.155 qpair failed and we were unable to recover it.
00:29:30.162 [... the same three-line error triple (posix_sock_create connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats for roughly 200 consecutive reconnect attempts, timestamps 2024-10-16 07:12:29.463439 through 07:12:29.542801; every attempt fails identically and no qpair is ever recovered ...]
00:29:30.162 [2024-10-16 07:12:29.543166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.162 [2024-10-16 07:12:29.543194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.162 qpair failed and we were unable to recover it. 00:29:30.162 [2024-10-16 07:12:29.543539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.162 [2024-10-16 07:12:29.543576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.162 qpair failed and we were unable to recover it. 00:29:30.162 [2024-10-16 07:12:29.543923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.162 [2024-10-16 07:12:29.543952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.162 qpair failed and we were unable to recover it. 00:29:30.162 [2024-10-16 07:12:29.544401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.162 [2024-10-16 07:12:29.544431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.162 qpair failed and we were unable to recover it. 00:29:30.162 [2024-10-16 07:12:29.544793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.162 [2024-10-16 07:12:29.544822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.162 qpair failed and we were unable to recover it. 00:29:30.162 [2024-10-16 07:12:29.545240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.162 [2024-10-16 07:12:29.545270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.162 qpair failed and we were unable to recover it. 00:29:30.162 [2024-10-16 07:12:29.545636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.162 [2024-10-16 07:12:29.545664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.162 qpair failed and we were unable to recover it. 00:29:30.162 [2024-10-16 07:12:29.546031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.162 [2024-10-16 07:12:29.546060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.162 qpair failed and we were unable to recover it. 00:29:30.162 [2024-10-16 07:12:29.546439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.162 [2024-10-16 07:12:29.546466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.162 qpair failed and we were unable to recover it. 00:29:30.162 [2024-10-16 07:12:29.546715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.162 [2024-10-16 07:12:29.546743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.162 qpair failed and we were unable to recover it. 
00:29:30.162 [2024-10-16 07:12:29.547110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.162 [2024-10-16 07:12:29.547142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.162 qpair failed and we were unable to recover it. 00:29:30.162 [2024-10-16 07:12:29.547530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.162 [2024-10-16 07:12:29.547559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.162 qpair failed and we were unable to recover it. 00:29:30.162 [2024-10-16 07:12:29.547983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.162 [2024-10-16 07:12:29.548013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.162 qpair failed and we were unable to recover it. 00:29:30.162 [2024-10-16 07:12:29.548257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.162 [2024-10-16 07:12:29.548285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.162 qpair failed and we were unable to recover it. 00:29:30.162 [2024-10-16 07:12:29.548644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.162 [2024-10-16 07:12:29.548672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.162 qpair failed and we were unable to recover it. 00:29:30.162 [2024-10-16 07:12:29.549111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.162 [2024-10-16 07:12:29.549141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.162 qpair failed and we were unable to recover it. 00:29:30.162 [2024-10-16 07:12:29.549513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.162 [2024-10-16 07:12:29.549544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.162 qpair failed and we were unable to recover it. 00:29:30.162 [2024-10-16 07:12:29.549840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.162 [2024-10-16 07:12:29.549887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.162 qpair failed and we were unable to recover it. 00:29:30.162 [2024-10-16 07:12:29.550272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.162 [2024-10-16 07:12:29.550303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.162 qpair failed and we were unable to recover it. 00:29:30.162 [2024-10-16 07:12:29.550660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.162 [2024-10-16 07:12:29.550689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.162 qpair failed and we were unable to recover it. 
00:29:30.162 [2024-10-16 07:12:29.551077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.162 [2024-10-16 07:12:29.551108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.162 qpair failed and we were unable to recover it. 00:29:30.162 [2024-10-16 07:12:29.551461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.162 [2024-10-16 07:12:29.551491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.162 qpair failed and we were unable to recover it. 00:29:30.162 [2024-10-16 07:12:29.551859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.162 [2024-10-16 07:12:29.551891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.162 qpair failed and we were unable to recover it. 00:29:30.162 [2024-10-16 07:12:29.552250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.162 [2024-10-16 07:12:29.552280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.162 qpair failed and we were unable to recover it. 00:29:30.162 [2024-10-16 07:12:29.552626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.162 [2024-10-16 07:12:29.552655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.162 qpair failed and we were unable to recover it. 00:29:30.162 [2024-10-16 07:12:29.553023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.162 [2024-10-16 07:12:29.553056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.162 qpair failed and we were unable to recover it. 00:29:30.162 [2024-10-16 07:12:29.553354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.162 [2024-10-16 07:12:29.553382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.162 qpair failed and we were unable to recover it. 00:29:30.162 [2024-10-16 07:12:29.553748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.162 [2024-10-16 07:12:29.553779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.162 qpair failed and we were unable to recover it. 00:29:30.162 [2024-10-16 07:12:29.554163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.162 [2024-10-16 07:12:29.554200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.162 qpair failed and we were unable to recover it. 00:29:30.162 [2024-10-16 07:12:29.554537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.162 [2024-10-16 07:12:29.554567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.162 qpair failed and we were unable to recover it. 
00:29:30.162 [2024-10-16 07:12:29.554809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.162 [2024-10-16 07:12:29.554842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.162 qpair failed and we were unable to recover it. 00:29:30.162 [2024-10-16 07:12:29.555225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.162 [2024-10-16 07:12:29.555256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-10-16 07:12:29.555620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-10-16 07:12:29.555650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-10-16 07:12:29.555919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-10-16 07:12:29.555949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-10-16 07:12:29.556343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-10-16 07:12:29.556373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-10-16 07:12:29.556753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-10-16 07:12:29.556781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-10-16 07:12:29.557131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-10-16 07:12:29.557161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-10-16 07:12:29.557495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-10-16 07:12:29.557523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-10-16 07:12:29.557882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-10-16 07:12:29.557911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-10-16 07:12:29.558273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-10-16 07:12:29.558305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 
00:29:30.163 [2024-10-16 07:12:29.558558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-10-16 07:12:29.558588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-10-16 07:12:29.558928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-10-16 07:12:29.558958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-10-16 07:12:29.559368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-10-16 07:12:29.559398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-10-16 07:12:29.559653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-10-16 07:12:29.559683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-10-16 07:12:29.560074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-10-16 07:12:29.560103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-10-16 07:12:29.560432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-10-16 07:12:29.560461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-10-16 07:12:29.560822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-10-16 07:12:29.560865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-10-16 07:12:29.561119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-10-16 07:12:29.561150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-10-16 07:12:29.561510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-10-16 07:12:29.561539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-10-16 07:12:29.561863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-10-16 07:12:29.561896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 
00:29:30.163 [2024-10-16 07:12:29.562257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-10-16 07:12:29.562287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-10-16 07:12:29.562673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-10-16 07:12:29.562703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-10-16 07:12:29.563000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-10-16 07:12:29.563030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-10-16 07:12:29.563405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-10-16 07:12:29.563435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-10-16 07:12:29.563870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-10-16 07:12:29.563901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-10-16 07:12:29.564284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-10-16 07:12:29.564319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-10-16 07:12:29.564679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-10-16 07:12:29.564707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-10-16 07:12:29.565034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-10-16 07:12:29.565065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-10-16 07:12:29.565425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-10-16 07:12:29.565453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-10-16 07:12:29.565650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-10-16 07:12:29.565678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 
00:29:30.163 [2024-10-16 07:12:29.565942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-10-16 07:12:29.565970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-10-16 07:12:29.566368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-10-16 07:12:29.566397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-10-16 07:12:29.566753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-10-16 07:12:29.566781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-10-16 07:12:29.567153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-10-16 07:12:29.567183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-10-16 07:12:29.567522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-10-16 07:12:29.567550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-10-16 07:12:29.567791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-10-16 07:12:29.567819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-10-16 07:12:29.568091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-10-16 07:12:29.568120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.164 [2024-10-16 07:12:29.568493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.164 [2024-10-16 07:12:29.568521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.164 qpair failed and we were unable to recover it. 00:29:30.164 [2024-10-16 07:12:29.568936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.164 [2024-10-16 07:12:29.568966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.164 qpair failed and we were unable to recover it. 00:29:30.164 [2024-10-16 07:12:29.569372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.164 [2024-10-16 07:12:29.569401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.164 qpair failed and we were unable to recover it. 
00:29:30.164 [2024-10-16 07:12:29.569659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.164 [2024-10-16 07:12:29.569688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.164 qpair failed and we were unable to recover it. 00:29:30.164 [2024-10-16 07:12:29.570033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.164 [2024-10-16 07:12:29.570063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.164 qpair failed and we were unable to recover it. 00:29:30.164 [2024-10-16 07:12:29.570457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.164 [2024-10-16 07:12:29.570485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.164 qpair failed and we were unable to recover it. 00:29:30.164 [2024-10-16 07:12:29.570879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.164 [2024-10-16 07:12:29.570909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.164 qpair failed and we were unable to recover it. 00:29:30.164 [2024-10-16 07:12:29.571269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.164 [2024-10-16 07:12:29.571298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.164 qpair failed and we were unable to recover it. 00:29:30.164 [2024-10-16 07:12:29.571662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.164 [2024-10-16 07:12:29.571690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.164 qpair failed and we were unable to recover it. 00:29:30.164 [2024-10-16 07:12:29.572075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.164 [2024-10-16 07:12:29.572104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.164 qpair failed and we were unable to recover it. 00:29:30.164 [2024-10-16 07:12:29.572487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.164 [2024-10-16 07:12:29.572515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.164 qpair failed and we were unable to recover it. 00:29:30.164 [2024-10-16 07:12:29.572878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.164 [2024-10-16 07:12:29.572909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.164 qpair failed and we were unable to recover it. 00:29:30.164 [2024-10-16 07:12:29.573276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.164 [2024-10-16 07:12:29.573303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.164 qpair failed and we were unable to recover it. 
00:29:30.164 [2024-10-16 07:12:29.573678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.164 [2024-10-16 07:12:29.573705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.164 qpair failed and we were unable to recover it. 00:29:30.164 [2024-10-16 07:12:29.574146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.164 [2024-10-16 07:12:29.574176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.164 qpair failed and we were unable to recover it. 00:29:30.164 [2024-10-16 07:12:29.574546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.164 [2024-10-16 07:12:29.574573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.164 qpair failed and we were unable to recover it. 00:29:30.164 [2024-10-16 07:12:29.574934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.164 [2024-10-16 07:12:29.574964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.164 qpair failed and we were unable to recover it. 00:29:30.164 [2024-10-16 07:12:29.575365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.164 [2024-10-16 07:12:29.575393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.164 qpair failed and we were unable to recover it. 00:29:30.164 [2024-10-16 07:12:29.575779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.164 [2024-10-16 07:12:29.575807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.164 qpair failed and we were unable to recover it. 00:29:30.164 [2024-10-16 07:12:29.576225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.164 [2024-10-16 07:12:29.576254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.164 qpair failed and we were unable to recover it. 00:29:30.164 [2024-10-16 07:12:29.576627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.164 [2024-10-16 07:12:29.576655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.164 qpair failed and we were unable to recover it. 00:29:30.164 [2024-10-16 07:12:29.576899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.164 [2024-10-16 07:12:29.576929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.164 qpair failed and we were unable to recover it. 00:29:30.164 [2024-10-16 07:12:29.577252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.164 [2024-10-16 07:12:29.577280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.164 qpair failed and we were unable to recover it. 
00:29:30.164 [2024-10-16 07:12:29.577669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.164 [2024-10-16 07:12:29.577697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.164 qpair failed and we were unable to recover it. 00:29:30.164 [2024-10-16 07:12:29.578045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.164 [2024-10-16 07:12:29.578074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.164 qpair failed and we were unable to recover it. 00:29:30.164 [2024-10-16 07:12:29.578460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.164 [2024-10-16 07:12:29.578488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.164 qpair failed and we were unable to recover it. 00:29:30.164 [2024-10-16 07:12:29.578884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.164 [2024-10-16 07:12:29.578913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.164 qpair failed and we were unable to recover it. 00:29:30.164 [2024-10-16 07:12:29.579325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.164 [2024-10-16 07:12:29.579354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.164 qpair failed and we were unable to recover it. 00:29:30.164 [2024-10-16 07:12:29.579540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.164 [2024-10-16 07:12:29.579569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.164 qpair failed and we were unable to recover it. 00:29:30.164 [2024-10-16 07:12:29.579930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.164 [2024-10-16 07:12:29.579965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.164 qpair failed and we were unable to recover it. 00:29:30.164 [2024-10-16 07:12:29.580354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.164 [2024-10-16 07:12:29.580382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.164 qpair failed and we were unable to recover it. 00:29:30.164 [2024-10-16 07:12:29.580636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.164 [2024-10-16 07:12:29.580664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.164 qpair failed and we were unable to recover it. 00:29:30.164 [2024-10-16 07:12:29.580902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.164 [2024-10-16 07:12:29.580935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.164 qpair failed and we were unable to recover it. 
00:29:30.164 [2024-10-16 07:12:29.581302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.165 [2024-10-16 07:12:29.581330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.165 qpair failed and we were unable to recover it. 00:29:30.165 [2024-10-16 07:12:29.581549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.165 [2024-10-16 07:12:29.581580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.165 qpair failed and we were unable to recover it. 00:29:30.165 [2024-10-16 07:12:29.581912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.165 [2024-10-16 07:12:29.581942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.165 qpair failed and we were unable to recover it. 00:29:30.165 [2024-10-16 07:12:29.582325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.165 [2024-10-16 07:12:29.582354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.165 qpair failed and we were unable to recover it. 00:29:30.165 [2024-10-16 07:12:29.582607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.165 [2024-10-16 07:12:29.582635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.165 qpair failed and we were unable to recover it. 00:29:30.165 [2024-10-16 07:12:29.583010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.165 [2024-10-16 07:12:29.583044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.165 qpair failed and we were unable to recover it. 00:29:30.165 [2024-10-16 07:12:29.583290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.165 [2024-10-16 07:12:29.583318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.165 qpair failed and we were unable to recover it. 00:29:30.165 [2024-10-16 07:12:29.583571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.165 [2024-10-16 07:12:29.583600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.165 qpair failed and we were unable to recover it. 00:29:30.165 [2024-10-16 07:12:29.583989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.165 [2024-10-16 07:12:29.584018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.165 qpair failed and we were unable to recover it. 00:29:30.165 [2024-10-16 07:12:29.584382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.165 [2024-10-16 07:12:29.584411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.165 qpair failed and we were unable to recover it. 
00:29:30.165 [2024-10-16 07:12:29.584766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.165 [2024-10-16 07:12:29.584793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.165 qpair failed and we were unable to recover it. 00:29:30.165 [2024-10-16 07:12:29.585201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.165 [2024-10-16 07:12:29.585231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.165 qpair failed and we were unable to recover it. 00:29:30.165 [2024-10-16 07:12:29.585604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.165 [2024-10-16 07:12:29.585631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.165 qpair failed and we were unable to recover it. 00:29:30.165 [2024-10-16 07:12:29.585903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.165 [2024-10-16 07:12:29.585932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.165 qpair failed and we were unable to recover it. 00:29:30.165 [2024-10-16 07:12:29.586192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.165 [2024-10-16 07:12:29.586220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.165 qpair failed and we were unable to recover it. 00:29:30.165 [2024-10-16 07:12:29.586583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.165 [2024-10-16 07:12:29.586612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.165 qpair failed and we were unable to recover it. 00:29:30.165 [2024-10-16 07:12:29.586997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.165 [2024-10-16 07:12:29.587027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.165 qpair failed and we were unable to recover it. 00:29:30.165 [2024-10-16 07:12:29.587401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.165 [2024-10-16 07:12:29.587429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.165 qpair failed and we were unable to recover it. 00:29:30.165 [2024-10-16 07:12:29.587866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.165 [2024-10-16 07:12:29.587896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.165 qpair failed and we were unable to recover it. 00:29:30.165 [2024-10-16 07:12:29.588283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.165 [2024-10-16 07:12:29.588311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.165 qpair failed and we were unable to recover it. 
00:29:30.165 [2024-10-16 07:12:29.588667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.165 [2024-10-16 07:12:29.588697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.165 qpair failed and we were unable to recover it. 00:29:30.165 [2024-10-16 07:12:29.589076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.165 [2024-10-16 07:12:29.589106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.165 qpair failed and we were unable to recover it. 00:29:30.165 [2024-10-16 07:12:29.589460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.165 [2024-10-16 07:12:29.589488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.431 qpair failed and we were unable to recover it. 00:29:30.431 [2024-10-16 07:12:29.825491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.431 [2024-10-16 07:12:29.825630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.431 qpair failed and we were unable to recover it. 00:29:30.431 [2024-10-16 07:12:29.826187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.431 [2024-10-16 07:12:29.826293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.431 qpair failed and we were unable to recover it. 00:29:30.432 [2024-10-16 07:12:29.826709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.432 [2024-10-16 07:12:29.826746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.432 qpair failed and we were unable to recover it. 00:29:30.432 [2024-10-16 07:12:29.827224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.432 [2024-10-16 07:12:29.827334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.432 qpair failed and we were unable to recover it. 00:29:30.432 [2024-10-16 07:12:29.827778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.432 [2024-10-16 07:12:29.827817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.432 qpair failed and we were unable to recover it. 00:29:30.432 [2024-10-16 07:12:29.828247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.432 [2024-10-16 07:12:29.828282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.432 qpair failed and we were unable to recover it. 00:29:30.432 [2024-10-16 07:12:29.828548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.432 [2024-10-16 07:12:29.828586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.432 qpair failed and we were unable to recover it. 
00:29:30.432 [2024-10-16 07:12:29.829140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.432 [2024-10-16 07:12:29.829251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420
00:29:30.432 qpair failed and we were unable to recover it.
00:29:30.437 [last 3 messages repeated ~209 more times between 2024-10-16 07:12:29.829612 and 07:12:29.909281, all for tqpair=0x1712180 connecting to addr=10.0.0.2, port=4420 with errno = 111]
00:29:30.437 [2024-10-16 07:12:29.909534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-10-16 07:12:29.909571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-10-16 07:12:29.909917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-10-16 07:12:29.909951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-10-16 07:12:29.910331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-10-16 07:12:29.910360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-10-16 07:12:29.910727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-10-16 07:12:29.910757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-10-16 07:12:29.911102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-10-16 07:12:29.911132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-10-16 07:12:29.911462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-10-16 07:12:29.911492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-10-16 07:12:29.911852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-10-16 07:12:29.911883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-10-16 07:12:29.912266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-10-16 07:12:29.912295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-10-16 07:12:29.912580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-10-16 07:12:29.912609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-10-16 07:12:29.912970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-10-16 07:12:29.913001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 
00:29:30.438 [2024-10-16 07:12:29.913367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-10-16 07:12:29.913397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-10-16 07:12:29.913750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-10-16 07:12:29.913779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-10-16 07:12:29.914161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-10-16 07:12:29.914191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-10-16 07:12:29.914576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-10-16 07:12:29.914605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-10-16 07:12:29.914949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-10-16 07:12:29.914979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-10-16 07:12:29.915339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-10-16 07:12:29.915369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-10-16 07:12:29.915753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-10-16 07:12:29.915782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-10-16 07:12:29.916140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-10-16 07:12:29.916170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-10-16 07:12:29.916433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-10-16 07:12:29.916462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-10-16 07:12:29.916823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-10-16 07:12:29.916860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 
00:29:30.438 [2024-10-16 07:12:29.917258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-10-16 07:12:29.917289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-10-16 07:12:29.917677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-10-16 07:12:29.917708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-10-16 07:12:29.918083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-10-16 07:12:29.918114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-10-16 07:12:29.918352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-10-16 07:12:29.918384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-10-16 07:12:29.918759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-10-16 07:12:29.918789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-10-16 07:12:29.918982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-10-16 07:12:29.919012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-10-16 07:12:29.919370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-10-16 07:12:29.919399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-10-16 07:12:29.919762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-10-16 07:12:29.919800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-10-16 07:12:29.920156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-10-16 07:12:29.920187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-10-16 07:12:29.920553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-10-16 07:12:29.920583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 
00:29:30.438 [2024-10-16 07:12:29.920932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-10-16 07:12:29.920964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-10-16 07:12:29.921227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-10-16 07:12:29.921257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-10-16 07:12:29.921678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-10-16 07:12:29.921707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-10-16 07:12:29.922083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-10-16 07:12:29.922116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-10-16 07:12:29.922528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-10-16 07:12:29.922559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-10-16 07:12:29.922910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-10-16 07:12:29.922940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-10-16 07:12:29.923379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-10-16 07:12:29.923409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-10-16 07:12:29.923781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-10-16 07:12:29.923811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-10-16 07:12:29.924191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.439 [2024-10-16 07:12:29.924222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.439 qpair failed and we were unable to recover it. 00:29:30.439 [2024-10-16 07:12:29.924588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.439 [2024-10-16 07:12:29.924619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.439 qpair failed and we were unable to recover it. 
00:29:30.439 [2024-10-16 07:12:29.924988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.439 [2024-10-16 07:12:29.925019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.439 qpair failed and we were unable to recover it. 00:29:30.439 [2024-10-16 07:12:29.925376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.439 [2024-10-16 07:12:29.925406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.439 qpair failed and we were unable to recover it. 00:29:30.439 [2024-10-16 07:12:29.925754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.439 [2024-10-16 07:12:29.925782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.439 qpair failed and we were unable to recover it. 00:29:30.439 [2024-10-16 07:12:29.926125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.439 [2024-10-16 07:12:29.926157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.439 qpair failed and we were unable to recover it. 00:29:30.439 [2024-10-16 07:12:29.926494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.439 [2024-10-16 07:12:29.926524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.439 qpair failed and we were unable to recover it. 00:29:30.439 [2024-10-16 07:12:29.926898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.439 [2024-10-16 07:12:29.926930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.439 qpair failed and we were unable to recover it. 00:29:30.711 [2024-10-16 07:12:29.927305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.711 [2024-10-16 07:12:29.927337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.711 qpair failed and we were unable to recover it. 00:29:30.711 [2024-10-16 07:12:29.927690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.711 [2024-10-16 07:12:29.927722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.711 qpair failed and we were unable to recover it. 00:29:30.711 [2024-10-16 07:12:29.928056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.711 [2024-10-16 07:12:29.928086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.711 qpair failed and we were unable to recover it. 00:29:30.711 [2024-10-16 07:12:29.928459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.711 [2024-10-16 07:12:29.928490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.711 qpair failed and we were unable to recover it. 
00:29:30.711 [2024-10-16 07:12:29.928840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.711 [2024-10-16 07:12:29.928883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.711 qpair failed and we were unable to recover it. 00:29:30.711 [2024-10-16 07:12:29.929210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.711 [2024-10-16 07:12:29.929239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.711 qpair failed and we were unable to recover it. 00:29:30.711 [2024-10-16 07:12:29.929604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.711 [2024-10-16 07:12:29.929633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.711 qpair failed and we were unable to recover it. 00:29:30.711 [2024-10-16 07:12:29.930017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.711 [2024-10-16 07:12:29.930047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.711 qpair failed and we were unable to recover it. 00:29:30.711 [2024-10-16 07:12:29.930338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.711 [2024-10-16 07:12:29.930366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.711 qpair failed and we were unable to recover it. 00:29:30.711 [2024-10-16 07:12:29.930735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.711 [2024-10-16 07:12:29.930764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.711 qpair failed and we were unable to recover it. 00:29:30.711 [2024-10-16 07:12:29.931132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.711 [2024-10-16 07:12:29.931162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.711 qpair failed and we were unable to recover it. 00:29:30.711 [2024-10-16 07:12:29.931542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.711 [2024-10-16 07:12:29.931572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.711 qpair failed and we were unable to recover it. 00:29:30.711 [2024-10-16 07:12:29.931929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.711 [2024-10-16 07:12:29.931961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.711 qpair failed and we were unable to recover it. 00:29:30.711 [2024-10-16 07:12:29.932328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.711 [2024-10-16 07:12:29.932368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.711 qpair failed and we were unable to recover it. 
00:29:30.712 [2024-10-16 07:12:29.932730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-10-16 07:12:29.932760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-10-16 07:12:29.933100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-10-16 07:12:29.933130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-10-16 07:12:29.933387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-10-16 07:12:29.933416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-10-16 07:12:29.933757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-10-16 07:12:29.933786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-10-16 07:12:29.934149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-10-16 07:12:29.934181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-10-16 07:12:29.934527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-10-16 07:12:29.934557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-10-16 07:12:29.934928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-10-16 07:12:29.934960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-10-16 07:12:29.935309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-10-16 07:12:29.935340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-10-16 07:12:29.935753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-10-16 07:12:29.935783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-10-16 07:12:29.936152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-10-16 07:12:29.936182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 
00:29:30.712 [2024-10-16 07:12:29.936524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-10-16 07:12:29.936555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-10-16 07:12:29.936899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-10-16 07:12:29.936931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-10-16 07:12:29.937280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-10-16 07:12:29.937309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-10-16 07:12:29.937640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-10-16 07:12:29.937669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-10-16 07:12:29.937907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-10-16 07:12:29.937940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-10-16 07:12:29.938165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-10-16 07:12:29.938199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-10-16 07:12:29.938576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-10-16 07:12:29.938608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-10-16 07:12:29.938976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-10-16 07:12:29.939007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-10-16 07:12:29.939354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-10-16 07:12:29.939385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-10-16 07:12:29.939747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-10-16 07:12:29.939777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 
00:29:30.712 [2024-10-16 07:12:29.940189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-10-16 07:12:29.940220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-10-16 07:12:29.940547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-10-16 07:12:29.940577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-10-16 07:12:29.940945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-10-16 07:12:29.940979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-10-16 07:12:29.941334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-10-16 07:12:29.941366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-10-16 07:12:29.941722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-10-16 07:12:29.941752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-10-16 07:12:29.942124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-10-16 07:12:29.942156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-10-16 07:12:29.942412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-10-16 07:12:29.942443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-10-16 07:12:29.942784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-10-16 07:12:29.942814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-10-16 07:12:29.943068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-10-16 07:12:29.943100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-10-16 07:12:29.943468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-10-16 07:12:29.943499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 
00:29:30.712 [2024-10-16 07:12:29.943868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-10-16 07:12:29.943905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-10-16 07:12:29.944207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-10-16 07:12:29.944236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-10-16 07:12:29.944680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-10-16 07:12:29.944708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-10-16 07:12:29.945075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-10-16 07:12:29.945107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-10-16 07:12:29.945470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-10-16 07:12:29.945499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-10-16 07:12:29.945854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-10-16 07:12:29.945891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-10-16 07:12:29.946258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-10-16 07:12:29.946289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-10-16 07:12:29.946651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-10-16 07:12:29.946680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-10-16 07:12:29.946993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-10-16 07:12:29.947022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 00:29:30.713 [2024-10-16 07:12:29.947386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-10-16 07:12:29.947416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 
00:29:30.713 [2024-10-16 07:12:29.947783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-10-16 07:12:29.947812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 00:29:30.713 [2024-10-16 07:12:29.948176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-10-16 07:12:29.948208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 00:29:30.713 [2024-10-16 07:12:29.948573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-10-16 07:12:29.948603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 00:29:30.713 [2024-10-16 07:12:29.948954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-10-16 07:12:29.948993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 00:29:30.713 [2024-10-16 07:12:29.949401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-10-16 07:12:29.949431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 00:29:30.713 [2024-10-16 07:12:29.949786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-10-16 07:12:29.949815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 00:29:30.713 [2024-10-16 07:12:29.950157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-10-16 07:12:29.950188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 00:29:30.713 [2024-10-16 07:12:29.950570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-10-16 07:12:29.950600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 00:29:30.713 [2024-10-16 07:12:29.950976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-10-16 07:12:29.951006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 00:29:30.713 [2024-10-16 07:12:29.951377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-10-16 07:12:29.951409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 
00:29:30.713 [2024-10-16 07:12:29.951779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-10-16 07:12:29.951809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 00:29:30.713 [2024-10-16 07:12:29.952200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-10-16 07:12:29.952230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 00:29:30.713 [2024-10-16 07:12:29.952581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-10-16 07:12:29.952611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 00:29:30.713 [2024-10-16 07:12:29.952970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-10-16 07:12:29.953008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 00:29:30.713 [2024-10-16 07:12:29.953343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-10-16 07:12:29.953375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 00:29:30.713 [2024-10-16 07:12:29.953740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-10-16 07:12:29.953771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 00:29:30.713 [2024-10-16 07:12:29.954128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-10-16 07:12:29.954161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 00:29:30.713 [2024-10-16 07:12:29.954498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-10-16 07:12:29.954528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 00:29:30.713 [2024-10-16 07:12:29.954869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-10-16 07:12:29.954899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 00:29:30.713 [2024-10-16 07:12:29.955256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-10-16 07:12:29.955286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 
00:29:30.713 [2024-10-16 07:12:29.955652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-10-16 07:12:29.955683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 00:29:30.713 [2024-10-16 07:12:29.956038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-10-16 07:12:29.956070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 00:29:30.713 [2024-10-16 07:12:29.956339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-10-16 07:12:29.956374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 00:29:30.713 [2024-10-16 07:12:29.956743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-10-16 07:12:29.956772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 00:29:30.713 [2024-10-16 07:12:29.957100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-10-16 07:12:29.957131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 00:29:30.713 [2024-10-16 07:12:29.957481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-10-16 07:12:29.957510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 00:29:30.713 [2024-10-16 07:12:29.957890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-10-16 07:12:29.957922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 00:29:30.713 [2024-10-16 07:12:29.958285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-10-16 07:12:29.958314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 00:29:30.713 [2024-10-16 07:12:29.958672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-10-16 07:12:29.958701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 00:29:30.713 [2024-10-16 07:12:29.959065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-10-16 07:12:29.959095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 
00:29:30.713 [2024-10-16 07:12:29.959419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.713 [2024-10-16 07:12:29.959449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420
00:29:30.713 qpair failed and we were unable to recover it.
[... the same posix_sock_create / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." triplet repeats for every reconnect attempt from 07:12:29.959685 through 07:12:30.039390, always with errno = 111 against tqpair=0x1712180, addr=10.0.0.2, port=4420 ...]
00:29:30.719 [2024-10-16 07:12:30.039390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.719 [2024-10-16 07:12:30.039426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420
00:29:30.719 qpair failed and we were unable to recover it.
00:29:30.719 [2024-10-16 07:12:30.039595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.719 [2024-10-16 07:12:30.039624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.719 qpair failed and we were unable to recover it. 00:29:30.719 [2024-10-16 07:12:30.040019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.719 [2024-10-16 07:12:30.040066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.719 qpair failed and we were unable to recover it. 00:29:30.719 [2024-10-16 07:12:30.040360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.719 [2024-10-16 07:12:30.040408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.719 qpair failed and we were unable to recover it. 00:29:30.719 [2024-10-16 07:12:30.040793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.719 [2024-10-16 07:12:30.040841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.719 qpair failed and we were unable to recover it. 00:29:30.719 [2024-10-16 07:12:30.041183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.719 [2024-10-16 07:12:30.041227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.719 qpair failed and we were unable to recover it. 00:29:30.719 [2024-10-16 07:12:30.041625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.719 [2024-10-16 07:12:30.041666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.719 qpair failed and we were unable to recover it. 00:29:30.719 [2024-10-16 07:12:30.041954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.719 [2024-10-16 07:12:30.042000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.719 qpair failed and we were unable to recover it. 00:29:30.719 [2024-10-16 07:12:30.042298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.719 [2024-10-16 07:12:30.042353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.719 qpair failed and we were unable to recover it. 00:29:30.719 [2024-10-16 07:12:30.042647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.719 [2024-10-16 07:12:30.042699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.719 qpair failed and we were unable to recover it. 00:29:30.719 [2024-10-16 07:12:30.043034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.719 [2024-10-16 07:12:30.043081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.719 qpair failed and we were unable to recover it. 
00:29:30.719 [2024-10-16 07:12:30.043390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.719 [2024-10-16 07:12:30.043424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.719 qpair failed and we were unable to recover it. 00:29:30.719 [2024-10-16 07:12:30.043800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.719 [2024-10-16 07:12:30.043831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.719 qpair failed and we were unable to recover it. 00:29:30.719 [2024-10-16 07:12:30.044207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.719 [2024-10-16 07:12:30.044236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.719 qpair failed and we were unable to recover it. 00:29:30.719 [2024-10-16 07:12:30.044499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.719 [2024-10-16 07:12:30.044528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.719 qpair failed and we were unable to recover it. 00:29:30.719 [2024-10-16 07:12:30.044885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.719 [2024-10-16 07:12:30.044915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.719 qpair failed and we were unable to recover it. 00:29:30.719 [2024-10-16 07:12:30.045315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.719 [2024-10-16 07:12:30.045344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.719 qpair failed and we were unable to recover it. 00:29:30.719 [2024-10-16 07:12:30.045709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.719 [2024-10-16 07:12:30.045741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.719 qpair failed and we were unable to recover it. 00:29:30.719 [2024-10-16 07:12:30.045933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.719 [2024-10-16 07:12:30.045964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.719 qpair failed and we were unable to recover it. 00:29:30.719 [2024-10-16 07:12:30.046364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.719 [2024-10-16 07:12:30.046395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.719 qpair failed and we were unable to recover it. 00:29:30.719 [2024-10-16 07:12:30.046635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.719 [2024-10-16 07:12:30.046665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.719 qpair failed and we were unable to recover it. 
00:29:30.719 [2024-10-16 07:12:30.047043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.719 [2024-10-16 07:12:30.047073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.719 qpair failed and we were unable to recover it. 00:29:30.719 [2024-10-16 07:12:30.047461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.719 [2024-10-16 07:12:30.047491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.719 qpair failed and we were unable to recover it. 00:29:30.719 [2024-10-16 07:12:30.047862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.719 [2024-10-16 07:12:30.047896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.719 qpair failed and we were unable to recover it. 00:29:30.719 [2024-10-16 07:12:30.048164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.719 [2024-10-16 07:12:30.048195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.719 qpair failed and we were unable to recover it. 00:29:30.719 [2024-10-16 07:12:30.048564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.719 [2024-10-16 07:12:30.048593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.719 qpair failed and we were unable to recover it. 00:29:30.719 [2024-10-16 07:12:30.048892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.719 [2024-10-16 07:12:30.048923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.719 qpair failed and we were unable to recover it. 00:29:30.719 [2024-10-16 07:12:30.049309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.719 [2024-10-16 07:12:30.049338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.719 qpair failed and we were unable to recover it. 00:29:30.719 [2024-10-16 07:12:30.049702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.719 [2024-10-16 07:12:30.049732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.719 qpair failed and we were unable to recover it. 00:29:30.719 [2024-10-16 07:12:30.050107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.719 [2024-10-16 07:12:30.050139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.719 qpair failed and we were unable to recover it. 00:29:30.719 [2024-10-16 07:12:30.050531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.719 [2024-10-16 07:12:30.050561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.719 qpair failed and we were unable to recover it. 
00:29:30.719 [2024-10-16 07:12:30.050906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.719 [2024-10-16 07:12:30.050938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.719 qpair failed and we were unable to recover it. 00:29:30.719 [2024-10-16 07:12:30.051174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.720 [2024-10-16 07:12:30.051203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.720 qpair failed and we were unable to recover it. 00:29:30.720 [2024-10-16 07:12:30.051442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.720 [2024-10-16 07:12:30.051470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.720 qpair failed and we were unable to recover it. 00:29:30.720 [2024-10-16 07:12:30.051719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.720 [2024-10-16 07:12:30.051748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.720 qpair failed and we were unable to recover it. 00:29:30.720 [2024-10-16 07:12:30.052126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.720 [2024-10-16 07:12:30.052157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.720 qpair failed and we were unable to recover it. 00:29:30.720 [2024-10-16 07:12:30.052415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.720 [2024-10-16 07:12:30.052445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.720 qpair failed and we were unable to recover it. 00:29:30.720 [2024-10-16 07:12:30.052831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.720 [2024-10-16 07:12:30.052871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.720 qpair failed and we were unable to recover it. 00:29:30.720 [2024-10-16 07:12:30.053151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.720 [2024-10-16 07:12:30.053182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.720 qpair failed and we were unable to recover it. 00:29:30.720 [2024-10-16 07:12:30.053533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.720 [2024-10-16 07:12:30.053562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.720 qpair failed and we were unable to recover it. 00:29:30.720 [2024-10-16 07:12:30.053917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.720 [2024-10-16 07:12:30.053947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.720 qpair failed and we were unable to recover it. 
00:29:30.720 [2024-10-16 07:12:30.054317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.720 [2024-10-16 07:12:30.054347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.720 qpair failed and we were unable to recover it. 00:29:30.720 [2024-10-16 07:12:30.054668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.720 [2024-10-16 07:12:30.054700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.720 qpair failed and we were unable to recover it. 00:29:30.720 [2024-10-16 07:12:30.054957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.720 [2024-10-16 07:12:30.054987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.720 qpair failed and we were unable to recover it. 00:29:30.720 [2024-10-16 07:12:30.055395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.720 [2024-10-16 07:12:30.055425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.720 qpair failed and we were unable to recover it. 00:29:30.720 [2024-10-16 07:12:30.055798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.720 [2024-10-16 07:12:30.055827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.720 qpair failed and we were unable to recover it. 00:29:30.720 [2024-10-16 07:12:30.056219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.720 [2024-10-16 07:12:30.056251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.720 qpair failed and we were unable to recover it. 00:29:30.720 [2024-10-16 07:12:30.056658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.720 [2024-10-16 07:12:30.056690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.720 qpair failed and we were unable to recover it. 00:29:30.720 [2024-10-16 07:12:30.057029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.720 [2024-10-16 07:12:30.057061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.720 qpair failed and we were unable to recover it. 00:29:30.720 [2024-10-16 07:12:30.057434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.720 [2024-10-16 07:12:30.057464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.720 qpair failed and we were unable to recover it. 00:29:30.720 [2024-10-16 07:12:30.057836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.720 [2024-10-16 07:12:30.057877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.720 qpair failed and we were unable to recover it. 
00:29:30.720 [2024-10-16 07:12:30.058297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.720 [2024-10-16 07:12:30.058326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.720 qpair failed and we were unable to recover it. 00:29:30.720 [2024-10-16 07:12:30.058699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.720 [2024-10-16 07:12:30.058728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.720 qpair failed and we were unable to recover it. 00:29:30.720 [2024-10-16 07:12:30.058990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.720 [2024-10-16 07:12:30.059023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.720 qpair failed and we were unable to recover it. 00:29:30.720 [2024-10-16 07:12:30.059393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.720 [2024-10-16 07:12:30.059423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.720 qpair failed and we were unable to recover it. 00:29:30.720 [2024-10-16 07:12:30.059735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.720 [2024-10-16 07:12:30.059765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.720 qpair failed and we were unable to recover it. 00:29:30.720 [2024-10-16 07:12:30.060155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.720 [2024-10-16 07:12:30.060185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.720 qpair failed and we were unable to recover it. 00:29:30.720 [2024-10-16 07:12:30.060552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.720 [2024-10-16 07:12:30.060582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.720 qpair failed and we were unable to recover it. 00:29:30.720 [2024-10-16 07:12:30.060860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.720 [2024-10-16 07:12:30.060891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.720 qpair failed and we were unable to recover it. 00:29:30.720 [2024-10-16 07:12:30.061154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.720 [2024-10-16 07:12:30.061183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.720 qpair failed and we were unable to recover it. 00:29:30.720 [2024-10-16 07:12:30.061528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.720 [2024-10-16 07:12:30.061559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.720 qpair failed and we were unable to recover it. 
00:29:30.720 [2024-10-16 07:12:30.061813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.720 [2024-10-16 07:12:30.061854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.720 qpair failed and we were unable to recover it. 00:29:30.720 [2024-10-16 07:12:30.062218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.720 [2024-10-16 07:12:30.062246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.720 qpair failed and we were unable to recover it. 00:29:30.720 [2024-10-16 07:12:30.062465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.720 [2024-10-16 07:12:30.062494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.720 qpair failed and we were unable to recover it. 00:29:30.720 [2024-10-16 07:12:30.062871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.720 [2024-10-16 07:12:30.062902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.720 qpair failed and we were unable to recover it. 00:29:30.720 [2024-10-16 07:12:30.063279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.720 [2024-10-16 07:12:30.063308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.720 qpair failed and we were unable to recover it. 00:29:30.720 [2024-10-16 07:12:30.063684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.720 [2024-10-16 07:12:30.063713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.720 qpair failed and we were unable to recover it. 00:29:30.720 [2024-10-16 07:12:30.064095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.721 [2024-10-16 07:12:30.064127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.721 qpair failed and we were unable to recover it. 00:29:30.721 [2024-10-16 07:12:30.064496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.721 [2024-10-16 07:12:30.064532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.721 qpair failed and we were unable to recover it. 00:29:30.721 [2024-10-16 07:12:30.064881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.721 [2024-10-16 07:12:30.064913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.721 qpair failed and we were unable to recover it. 00:29:30.721 [2024-10-16 07:12:30.065259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.721 [2024-10-16 07:12:30.065289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.721 qpair failed and we were unable to recover it. 
00:29:30.721 [2024-10-16 07:12:30.065643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.721 [2024-10-16 07:12:30.065672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.721 qpair failed and we were unable to recover it. 00:29:30.721 [2024-10-16 07:12:30.066062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.721 [2024-10-16 07:12:30.066094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.721 qpair failed and we were unable to recover it. 00:29:30.721 [2024-10-16 07:12:30.066430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.721 [2024-10-16 07:12:30.066460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.721 qpair failed and we were unable to recover it. 00:29:30.721 [2024-10-16 07:12:30.066821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.721 [2024-10-16 07:12:30.066860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.721 qpair failed and we were unable to recover it. 00:29:30.721 [2024-10-16 07:12:30.067349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.721 [2024-10-16 07:12:30.067378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.721 qpair failed and we were unable to recover it. 00:29:30.721 [2024-10-16 07:12:30.067638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.721 [2024-10-16 07:12:30.067667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.721 qpair failed and we were unable to recover it. 00:29:30.721 [2024-10-16 07:12:30.068015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.721 [2024-10-16 07:12:30.068045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.721 qpair failed and we were unable to recover it. 00:29:30.721 [2024-10-16 07:12:30.068422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.721 [2024-10-16 07:12:30.068452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.721 qpair failed and we were unable to recover it. 00:29:30.721 [2024-10-16 07:12:30.068691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.721 [2024-10-16 07:12:30.068720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.721 qpair failed and we were unable to recover it. 00:29:30.721 [2024-10-16 07:12:30.069073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.721 [2024-10-16 07:12:30.069104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.721 qpair failed and we were unable to recover it. 
00:29:30.721 [2024-10-16 07:12:30.069461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.721 [2024-10-16 07:12:30.069492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.721 qpair failed and we were unable to recover it. 00:29:30.721 [2024-10-16 07:12:30.069840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.721 [2024-10-16 07:12:30.069893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.721 qpair failed and we were unable to recover it. 00:29:30.721 [2024-10-16 07:12:30.070255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.721 [2024-10-16 07:12:30.070287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.721 qpair failed and we were unable to recover it. 00:29:30.721 [2024-10-16 07:12:30.070558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.721 [2024-10-16 07:12:30.070588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.721 qpair failed and we were unable to recover it. 00:29:30.721 [2024-10-16 07:12:30.070970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.721 [2024-10-16 07:12:30.071002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.721 qpair failed and we were unable to recover it. 00:29:30.721 [2024-10-16 07:12:30.071272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.721 [2024-10-16 07:12:30.071302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.721 qpair failed and we were unable to recover it. 00:29:30.721 [2024-10-16 07:12:30.071664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.721 [2024-10-16 07:12:30.071693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.721 qpair failed and we were unable to recover it. 00:29:30.721 [2024-10-16 07:12:30.072069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.721 [2024-10-16 07:12:30.072102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.721 qpair failed and we were unable to recover it. 00:29:30.721 [2024-10-16 07:12:30.072496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.721 [2024-10-16 07:12:30.072529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.721 qpair failed and we were unable to recover it. 00:29:30.721 [2024-10-16 07:12:30.072874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.721 [2024-10-16 07:12:30.072905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.721 qpair failed and we were unable to recover it. 
00:29:30.721 [2024-10-16 07:12:30.073254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.721 [2024-10-16 07:12:30.073287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.721 qpair failed and we were unable to recover it. 00:29:30.721 [2024-10-16 07:12:30.073647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.721 [2024-10-16 07:12:30.073677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.721 qpair failed and we were unable to recover it. 00:29:30.721 [2024-10-16 07:12:30.073919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.721 [2024-10-16 07:12:30.073955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.721 qpair failed and we were unable to recover it. 00:29:30.721 [2024-10-16 07:12:30.074347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.721 [2024-10-16 07:12:30.074380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.721 qpair failed and we were unable to recover it. 00:29:30.721 [2024-10-16 07:12:30.074761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.721 [2024-10-16 07:12:30.074798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.721 qpair failed and we were unable to recover it. 00:29:30.721 [2024-10-16 07:12:30.075249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.721 [2024-10-16 07:12:30.075282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.721 qpair failed and we were unable to recover it. 00:29:30.721 [2024-10-16 07:12:30.075510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.721 [2024-10-16 07:12:30.075541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.721 qpair failed and we were unable to recover it. 00:29:30.721 [2024-10-16 07:12:30.075924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.721 [2024-10-16 07:12:30.075957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.721 qpair failed and we were unable to recover it. 00:29:30.721 [2024-10-16 07:12:30.076221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.721 [2024-10-16 07:12:30.076252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.721 qpair failed and we were unable to recover it. 00:29:30.721 [2024-10-16 07:12:30.076605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.721 [2024-10-16 07:12:30.076635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.721 qpair failed and we were unable to recover it. 
00:29:30.721 [2024-10-16 07:12:30.077007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.721 [2024-10-16 07:12:30.077037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.721 qpair failed and we were unable to recover it. 00:29:30.721 [2024-10-16 07:12:30.077382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.722 [2024-10-16 07:12:30.077413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.722 qpair failed and we were unable to recover it. 00:29:30.722 [2024-10-16 07:12:30.077638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.722 [2024-10-16 07:12:30.077671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.722 qpair failed and we were unable to recover it. 00:29:30.722 [2024-10-16 07:12:30.078079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.722 [2024-10-16 07:12:30.078110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.722 qpair failed and we were unable to recover it. 00:29:30.722 [2024-10-16 07:12:30.078467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.722 [2024-10-16 07:12:30.078497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.722 qpair failed and we were unable to recover it. 00:29:30.722 [2024-10-16 07:12:30.078864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.722 [2024-10-16 07:12:30.078894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.722 qpair failed and we were unable to recover it. 00:29:30.722 [2024-10-16 07:12:30.079140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.722 [2024-10-16 07:12:30.079169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.722 qpair failed and we were unable to recover it. 00:29:30.722 [2024-10-16 07:12:30.079566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.722 [2024-10-16 07:12:30.079595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.722 qpair failed and we were unable to recover it. 00:29:30.722 [2024-10-16 07:12:30.079959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.722 [2024-10-16 07:12:30.079992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.722 qpair failed and we were unable to recover it. 00:29:30.722 [2024-10-16 07:12:30.080349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.722 [2024-10-16 07:12:30.080380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.722 qpair failed and we were unable to recover it. 
00:29:30.722 [2024-10-16 07:12:30.080916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.722 [2024-10-16 07:12:30.080950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.722 qpair failed and we were unable to recover it. 00:29:30.722 [2024-10-16 07:12:30.081319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.722 [2024-10-16 07:12:30.081350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.722 qpair failed and we were unable to recover it. 00:29:30.722 [2024-10-16 07:12:30.081615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.722 [2024-10-16 07:12:30.081644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.722 qpair failed and we were unable to recover it. 00:29:30.722 [2024-10-16 07:12:30.081986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.722 [2024-10-16 07:12:30.082019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.722 qpair failed and we were unable to recover it. 00:29:30.722 [2024-10-16 07:12:30.082294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.722 [2024-10-16 07:12:30.082323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.722 qpair failed and we were unable to recover it. 00:29:30.722 [2024-10-16 07:12:30.082705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.722 [2024-10-16 07:12:30.082737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.722 qpair failed and we were unable to recover it. 00:29:30.722 [2024-10-16 07:12:30.082991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.722 [2024-10-16 07:12:30.083023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.722 qpair failed and we were unable to recover it. 00:29:30.722 [2024-10-16 07:12:30.083307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.722 [2024-10-16 07:12:30.083336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.722 qpair failed and we were unable to recover it. 00:29:30.722 [2024-10-16 07:12:30.083532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.722 [2024-10-16 07:12:30.083565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.722 qpair failed and we were unable to recover it. 00:29:30.722 [2024-10-16 07:12:30.083754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.722 [2024-10-16 07:12:30.083803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.722 qpair failed and we were unable to recover it. 
00:29:30.722 [2024-10-16 07:12:30.084120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.722 [2024-10-16 07:12:30.084173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.722 qpair failed and we were unable to recover it. 00:29:30.722 [2024-10-16 07:12:30.084444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.722 [2024-10-16 07:12:30.084506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.722 qpair failed and we were unable to recover it. 00:29:30.722 [2024-10-16 07:12:30.084859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.722 [2024-10-16 07:12:30.084908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.722 qpair failed and we were unable to recover it. 00:29:30.722 [2024-10-16 07:12:30.085196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.722 [2024-10-16 07:12:30.085255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.722 qpair failed and we were unable to recover it. 00:29:30.722 [2024-10-16 07:12:30.085570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.722 [2024-10-16 07:12:30.085618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.722 qpair failed and we were unable to recover it. 00:29:30.722 [2024-10-16 07:12:30.085949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.722 [2024-10-16 07:12:30.086003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.722 qpair failed and we were unable to recover it. 00:29:30.722 [2024-10-16 07:12:30.086210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.722 [2024-10-16 07:12:30.086243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.722 qpair failed and we were unable to recover it. 00:29:30.722 [2024-10-16 07:12:30.086526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.722 [2024-10-16 07:12:30.086557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.722 qpair failed and we were unable to recover it. 00:29:30.722 [2024-10-16 07:12:30.086997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.722 [2024-10-16 07:12:30.087029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.722 qpair failed and we were unable to recover it. 00:29:30.722 [2024-10-16 07:12:30.087405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.722 [2024-10-16 07:12:30.087434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.722 qpair failed and we were unable to recover it. 
00:29:30.722 [2024-10-16 07:12:30.087789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.722 [2024-10-16 07:12:30.087821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.722 qpair failed and we were unable to recover it. 00:29:30.722 [2024-10-16 07:12:30.088171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.722 [2024-10-16 07:12:30.088203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.722 qpair failed and we were unable to recover it. 00:29:30.722 [2024-10-16 07:12:30.088552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.722 [2024-10-16 07:12:30.088583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.722 qpair failed and we were unable to recover it. 00:29:30.722 [2024-10-16 07:12:30.088935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.722 [2024-10-16 07:12:30.088965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.722 qpair failed and we were unable to recover it. 00:29:30.722 [2024-10-16 07:12:30.089357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.722 [2024-10-16 07:12:30.089389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.722 qpair failed and we were unable to recover it. 00:29:30.722 [2024-10-16 07:12:30.089736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.722 [2024-10-16 07:12:30.089768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.722 qpair failed and we were unable to recover it. 00:29:30.722 [2024-10-16 07:12:30.090144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.722 [2024-10-16 07:12:30.090176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.722 qpair failed and we were unable to recover it. 00:29:30.722 [2024-10-16 07:12:30.090465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.722 [2024-10-16 07:12:30.090495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.722 qpair failed and we were unable to recover it. 00:29:30.722 [2024-10-16 07:12:30.090860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.722 [2024-10-16 07:12:30.090893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.722 qpair failed and we were unable to recover it. 00:29:30.722 [2024-10-16 07:12:30.091264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.722 [2024-10-16 07:12:30.091294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.722 qpair failed and we were unable to recover it. 
00:29:30.722 [2024-10-16 07:12:30.091661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.722 [2024-10-16 07:12:30.091691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.722 qpair failed and we were unable to recover it. 00:29:30.722 [2024-10-16 07:12:30.092038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.723 [2024-10-16 07:12:30.092071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.723 qpair failed and we were unable to recover it. 00:29:30.723 [2024-10-16 07:12:30.092425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.723 [2024-10-16 07:12:30.092454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.723 qpair failed and we were unable to recover it. 00:29:30.723 [2024-10-16 07:12:30.092819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.723 [2024-10-16 07:12:30.092856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.723 qpair failed and we were unable to recover it. 00:29:30.723 [2024-10-16 07:12:30.093222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.723 [2024-10-16 07:12:30.093251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.723 qpair failed and we were unable to recover it. 00:29:30.723 [2024-10-16 07:12:30.093584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.723 [2024-10-16 07:12:30.093614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.723 qpair failed and we were unable to recover it. 00:29:30.723 [2024-10-16 07:12:30.093987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.723 [2024-10-16 07:12:30.094020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.723 qpair failed and we were unable to recover it. 00:29:30.723 [2024-10-16 07:12:30.094404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.723 [2024-10-16 07:12:30.094435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.723 qpair failed and we were unable to recover it. 00:29:30.723 [2024-10-16 07:12:30.094782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.723 [2024-10-16 07:12:30.094820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.723 qpair failed and we were unable to recover it. 00:29:30.723 [2024-10-16 07:12:30.095187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.723 [2024-10-16 07:12:30.095219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712180 with addr=10.0.0.2, port=4420 00:29:30.723 qpair failed and we were unable to recover it. 
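Note: errno = 111 is ECONNREFUSED on Linux, meaning nothing is accepting TCP connections at 10.0.0.2 on port 4420 (the IANA-assigned NVMe/TCP port), so every reconnect attempt the NVMe/TCP initiator makes is rejected immediately and the qpair cannot be recovered. A minimal standalone sketch of how that refusal surfaces at the socket layer (illustrative only, not SPDK code; the address and port simply mirror the log):

```c
/*
 * Minimal sketch (not SPDK code): how a refused TCP connect surfaces as
 * errno 111 (ECONNREFUSED) on Linux, the condition posix_sock_create
 * reports above. The address and port mirror the log and are illustrative.
 */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
	inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

	int fd = socket(AF_INET, SOCK_STREAM, 0);
	if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
		/* With no listener on 10.0.0.2:4420 this prints errno 111. */
		printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
	}
	close(fd);
	return 0;
}
```

Run against a host with no NVMe/TCP listener on that port, this prints "connect() failed, errno = 111 (Connection refused)", the same condition logged above.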
00:29:30.723 [... the same sequence for tqpair=0x1712180 repeats 4 more times between 07:12:30.095576 and 07:12:30.096757 ...]
00:29:30.723 Read completed with error (sct=0, sc=8)
00:29:30.723 starting I/O failed
00:29:30.723 [... 31 more outstanding I/Os (22 reads and 10 writes in total) complete with error (sct=0, sc=8), each followed by "starting I/O failed" ...]
00:29:30.723 [2024-10-16 07:12:30.097652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:30.723 [2024-10-16 07:12:30.098072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.723 [2024-10-16 07:12:30.098134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:30.723 qpair failed and we were unable to recover it.
00:29:30.723 [... the same sequence for tqpair=0x7f64d0000b90 repeats 7 more times between 07:12:30.098518 and 07:12:30.101130 ...]
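[annotation] The burst of "completed with error (sct=0, sc=8)" entries summarized above are NVMe completion statuses for the I/Os that were still outstanding when the qpair went down. Assuming the fields follow the NVMe base specification's status encoding, sct=0 is Status Code Type 0h (Generic Command Status) and sc=0x08 is "Command Aborted due to SQ Deletion", which is consistent with the submission queue being torn down; the subsequent "CQ transport error -6" is -ENXIO, matching the "(No such device or address)" text the log itself prints. A small decoding sketch; the mapping comes from the spec, not from this log:

#include <stdio.h>

/* Decode the (sct, sc) pair, assuming NVMe base-spec status encoding. */
static const char *decode_status(unsigned sct, unsigned sc)
{
    if (sct == 0) {                    /* Status Code Type 0h: Generic Command Status */
        switch (sc) {
        case 0x00: return "Successful Completion";
        case 0x04: return "Data Transfer Error";
        case 0x08: return "Command Aborted due to SQ Deletion";
        default:   return "other Generic Command Status";
        }
    }
    return "non-generic Status Code Type";
}

int main(void)
{
    /* Every failed read and write above carried sct=0, sc=8. */
    printf("sct=0 sc=8 -> %s\n", decode_status(0, 0x8));
    return 0;
}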
00:29:30.723 [... the connect() failed (errno = 111) / sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." sequence repeats 170 more times between 07:12:30.101526 and 07:12:30.165610 ...]
00:29:30.728 [2024-10-16 07:12:30.165974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.728 [2024-10-16 07:12:30.166004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.728 qpair failed and we were unable to recover it. 00:29:30.728 [2024-10-16 07:12:30.166368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.728 [2024-10-16 07:12:30.166397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.728 qpair failed and we were unable to recover it. 00:29:30.728 [2024-10-16 07:12:30.166775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.728 [2024-10-16 07:12:30.166805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.728 qpair failed and we were unable to recover it. 00:29:30.728 [2024-10-16 07:12:30.167208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.728 [2024-10-16 07:12:30.167239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.728 qpair failed and we were unable to recover it. 00:29:30.728 [2024-10-16 07:12:30.167677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.728 [2024-10-16 07:12:30.167707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.728 qpair failed and we were unable to recover it. 00:29:30.728 [2024-10-16 07:12:30.168040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.728 [2024-10-16 07:12:30.168072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.728 qpair failed and we were unable to recover it. 00:29:30.728 [2024-10-16 07:12:30.168439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.728 [2024-10-16 07:12:30.168468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.728 qpair failed and we were unable to recover it. 00:29:30.728 [2024-10-16 07:12:30.168833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.728 [2024-10-16 07:12:30.168870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.728 qpair failed and we were unable to recover it. 00:29:30.728 [2024-10-16 07:12:30.169274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.728 [2024-10-16 07:12:30.169304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.728 qpair failed and we were unable to recover it. 00:29:30.728 [2024-10-16 07:12:30.169663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.728 [2024-10-16 07:12:30.169691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.728 qpair failed and we were unable to recover it. 
00:29:30.728 [2024-10-16 07:12:30.170087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.728 [2024-10-16 07:12:30.170118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.728 qpair failed and we were unable to recover it. 00:29:30.728 [2024-10-16 07:12:30.170468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.728 [2024-10-16 07:12:30.170497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.728 qpair failed and we were unable to recover it. 00:29:30.728 [2024-10-16 07:12:30.170866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.728 [2024-10-16 07:12:30.170896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.728 qpair failed and we were unable to recover it. 00:29:30.728 [2024-10-16 07:12:30.171129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.728 [2024-10-16 07:12:30.171162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.728 qpair failed and we were unable to recover it. 00:29:30.728 [2024-10-16 07:12:30.171458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.728 [2024-10-16 07:12:30.171487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.728 qpair failed and we were unable to recover it. 00:29:30.728 [2024-10-16 07:12:30.171757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.728 [2024-10-16 07:12:30.171787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.728 qpair failed and we were unable to recover it. 00:29:30.728 [2024-10-16 07:12:30.172170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.728 [2024-10-16 07:12:30.172201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.728 qpair failed and we were unable to recover it. 00:29:30.728 [2024-10-16 07:12:30.172592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.729 [2024-10-16 07:12:30.172621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.729 qpair failed and we were unable to recover it. 00:29:30.729 [2024-10-16 07:12:30.172987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.729 [2024-10-16 07:12:30.173018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.729 qpair failed and we were unable to recover it. 00:29:30.729 [2024-10-16 07:12:30.173281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.729 [2024-10-16 07:12:30.173310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.729 qpair failed and we were unable to recover it. 
00:29:30.729 [2024-10-16 07:12:30.173565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.729 [2024-10-16 07:12:30.173597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.729 qpair failed and we were unable to recover it. 00:29:30.729 [2024-10-16 07:12:30.173979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.729 [2024-10-16 07:12:30.174010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.729 qpair failed and we were unable to recover it. 00:29:30.729 [2024-10-16 07:12:30.174378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.729 [2024-10-16 07:12:30.174408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.729 qpair failed and we were unable to recover it. 00:29:30.729 [2024-10-16 07:12:30.174778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.729 [2024-10-16 07:12:30.174812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.729 qpair failed and we were unable to recover it. 00:29:30.729 [2024-10-16 07:12:30.175195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.729 [2024-10-16 07:12:30.175227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.729 qpair failed and we were unable to recover it. 00:29:30.729 [2024-10-16 07:12:30.175587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.729 [2024-10-16 07:12:30.175616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.729 qpair failed and we were unable to recover it. 00:29:30.729 [2024-10-16 07:12:30.175864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.729 [2024-10-16 07:12:30.175894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.729 qpair failed and we were unable to recover it. 00:29:30.729 [2024-10-16 07:12:30.176255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.729 [2024-10-16 07:12:30.176284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.729 qpair failed and we were unable to recover it. 00:29:30.729 [2024-10-16 07:12:30.176651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.729 [2024-10-16 07:12:30.176680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.729 qpair failed and we were unable to recover it. 00:29:30.729 [2024-10-16 07:12:30.176943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.729 [2024-10-16 07:12:30.176973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.729 qpair failed and we were unable to recover it. 
00:29:30.729 [2024-10-16 07:12:30.177333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.729 [2024-10-16 07:12:30.177363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.729 qpair failed and we were unable to recover it. 00:29:30.729 [2024-10-16 07:12:30.177715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.729 [2024-10-16 07:12:30.177743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.729 qpair failed and we were unable to recover it. 00:29:30.729 [2024-10-16 07:12:30.178087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.729 [2024-10-16 07:12:30.178116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.729 qpair failed and we were unable to recover it. 00:29:30.729 [2024-10-16 07:12:30.178476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.729 [2024-10-16 07:12:30.178505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.729 qpair failed and we were unable to recover it. 00:29:30.729 [2024-10-16 07:12:30.178927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.729 [2024-10-16 07:12:30.178958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.729 qpair failed and we were unable to recover it. 00:29:30.729 [2024-10-16 07:12:30.179311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.729 [2024-10-16 07:12:30.179340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.729 qpair failed and we were unable to recover it. 00:29:30.729 [2024-10-16 07:12:30.179699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.729 [2024-10-16 07:12:30.179728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.729 qpair failed and we were unable to recover it. 00:29:30.729 [2024-10-16 07:12:30.180076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.729 [2024-10-16 07:12:30.180106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.729 qpair failed and we were unable to recover it. 00:29:30.729 [2024-10-16 07:12:30.180472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.729 [2024-10-16 07:12:30.180501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.729 qpair failed and we were unable to recover it. 00:29:30.729 [2024-10-16 07:12:30.180932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.729 [2024-10-16 07:12:30.180963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.729 qpair failed and we were unable to recover it. 
00:29:30.729 [2024-10-16 07:12:30.181313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.729 [2024-10-16 07:12:30.181342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.729 qpair failed and we were unable to recover it. 00:29:30.729 [2024-10-16 07:12:30.181594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.729 [2024-10-16 07:12:30.181623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.729 qpair failed and we were unable to recover it. 00:29:30.729 [2024-10-16 07:12:30.181867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.729 [2024-10-16 07:12:30.181899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.729 qpair failed and we were unable to recover it. 00:29:30.729 [2024-10-16 07:12:30.182257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.729 [2024-10-16 07:12:30.182286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.729 qpair failed and we were unable to recover it. 00:29:30.729 [2024-10-16 07:12:30.182648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.729 [2024-10-16 07:12:30.182677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.729 qpair failed and we were unable to recover it. 00:29:30.729 [2024-10-16 07:12:30.183034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.729 [2024-10-16 07:12:30.183073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.729 qpair failed and we were unable to recover it. 00:29:30.729 [2024-10-16 07:12:30.183436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.729 [2024-10-16 07:12:30.183465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.729 qpair failed and we were unable to recover it. 00:29:30.729 [2024-10-16 07:12:30.183745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.729 [2024-10-16 07:12:30.183773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.729 qpair failed and we were unable to recover it. 00:29:30.729 [2024-10-16 07:12:30.184109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.729 [2024-10-16 07:12:30.184141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.729 qpair failed and we were unable to recover it. 00:29:30.729 [2024-10-16 07:12:30.184396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.729 [2024-10-16 07:12:30.184425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.729 qpair failed and we were unable to recover it. 
00:29:30.730 [2024-10-16 07:12:30.184792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.730 [2024-10-16 07:12:30.184822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.730 qpair failed and we were unable to recover it. 00:29:30.730 [2024-10-16 07:12:30.185093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.730 [2024-10-16 07:12:30.185122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.730 qpair failed and we were unable to recover it. 00:29:30.730 [2024-10-16 07:12:30.185485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.730 [2024-10-16 07:12:30.185515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.730 qpair failed and we were unable to recover it. 00:29:30.730 [2024-10-16 07:12:30.185877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.730 [2024-10-16 07:12:30.185909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.730 qpair failed and we were unable to recover it. 00:29:30.730 [2024-10-16 07:12:30.186290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.730 [2024-10-16 07:12:30.186320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.730 qpair failed and we were unable to recover it. 00:29:30.730 [2024-10-16 07:12:30.186561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.730 [2024-10-16 07:12:30.186592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.730 qpair failed and we were unable to recover it. 00:29:30.730 [2024-10-16 07:12:30.186937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.730 [2024-10-16 07:12:30.186966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.730 qpair failed and we were unable to recover it. 00:29:30.730 [2024-10-16 07:12:30.187317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.730 [2024-10-16 07:12:30.187346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.730 qpair failed and we were unable to recover it. 00:29:30.730 [2024-10-16 07:12:30.187709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.730 [2024-10-16 07:12:30.187738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.730 qpair failed and we were unable to recover it. 00:29:30.730 [2024-10-16 07:12:30.188108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.730 [2024-10-16 07:12:30.188139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.730 qpair failed and we were unable to recover it. 
00:29:30.730 [2024-10-16 07:12:30.188517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.730 [2024-10-16 07:12:30.188547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.730 qpair failed and we were unable to recover it. 00:29:30.730 [2024-10-16 07:12:30.188916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.730 [2024-10-16 07:12:30.188946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.730 qpair failed and we were unable to recover it. 00:29:30.730 [2024-10-16 07:12:30.189205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.730 [2024-10-16 07:12:30.189234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.730 qpair failed and we were unable to recover it. 00:29:30.730 [2024-10-16 07:12:30.189467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.730 [2024-10-16 07:12:30.189504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.730 qpair failed and we were unable to recover it. 00:29:30.730 [2024-10-16 07:12:30.189876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.730 [2024-10-16 07:12:30.189907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.730 qpair failed and we were unable to recover it. 00:29:30.730 [2024-10-16 07:12:30.190255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.730 [2024-10-16 07:12:30.190285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.730 qpair failed and we were unable to recover it. 00:29:30.730 [2024-10-16 07:12:30.190583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.730 [2024-10-16 07:12:30.190613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.730 qpair failed and we were unable to recover it. 00:29:30.730 [2024-10-16 07:12:30.191045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.730 [2024-10-16 07:12:30.191075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.730 qpair failed and we were unable to recover it. 00:29:30.730 [2024-10-16 07:12:30.191459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.730 [2024-10-16 07:12:30.191488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.730 qpair failed and we were unable to recover it. 00:29:30.730 [2024-10-16 07:12:30.191782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.730 [2024-10-16 07:12:30.191811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.730 qpair failed and we were unable to recover it. 
00:29:30.730 [2024-10-16 07:12:30.192189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.730 [2024-10-16 07:12:30.192219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.730 qpair failed and we were unable to recover it. 00:29:30.730 [2024-10-16 07:12:30.192596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.730 [2024-10-16 07:12:30.192626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.730 qpair failed and we were unable to recover it. 00:29:30.730 [2024-10-16 07:12:30.192997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.730 [2024-10-16 07:12:30.193028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.730 qpair failed and we were unable to recover it. 00:29:30.730 [2024-10-16 07:12:30.193387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.730 [2024-10-16 07:12:30.193416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.730 qpair failed and we were unable to recover it. 00:29:30.730 [2024-10-16 07:12:30.193777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.730 [2024-10-16 07:12:30.193805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.730 qpair failed and we were unable to recover it. 00:29:30.730 [2024-10-16 07:12:30.194176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.730 [2024-10-16 07:12:30.194207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.730 qpair failed and we were unable to recover it. 00:29:30.730 [2024-10-16 07:12:30.194573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.730 [2024-10-16 07:12:30.194601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.730 qpair failed and we were unable to recover it. 00:29:30.730 [2024-10-16 07:12:30.194955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.730 [2024-10-16 07:12:30.194988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.730 qpair failed and we were unable to recover it. 00:29:30.730 [2024-10-16 07:12:30.195354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.730 [2024-10-16 07:12:30.195382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.730 qpair failed and we were unable to recover it. 00:29:30.730 [2024-10-16 07:12:30.195746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.730 [2024-10-16 07:12:30.195774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.730 qpair failed and we were unable to recover it. 
00:29:30.730 [2024-10-16 07:12:30.196128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.730 [2024-10-16 07:12:30.196158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.730 qpair failed and we were unable to recover it. 00:29:30.730 [2024-10-16 07:12:30.196518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.730 [2024-10-16 07:12:30.196547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.730 qpair failed and we were unable to recover it. 00:29:30.730 [2024-10-16 07:12:30.196766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.730 [2024-10-16 07:12:30.196797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.730 qpair failed and we were unable to recover it. 00:29:30.730 [2024-10-16 07:12:30.197202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.730 [2024-10-16 07:12:30.197232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.730 qpair failed and we were unable to recover it. 00:29:30.730 [2024-10-16 07:12:30.197560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.731 [2024-10-16 07:12:30.197592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.731 qpair failed and we were unable to recover it. 00:29:30.731 [2024-10-16 07:12:30.197958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.731 [2024-10-16 07:12:30.197988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.731 qpair failed and we were unable to recover it. 00:29:30.731 [2024-10-16 07:12:30.198351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.731 [2024-10-16 07:12:30.198380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.731 qpair failed and we were unable to recover it. 00:29:30.731 [2024-10-16 07:12:30.198793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.731 [2024-10-16 07:12:30.198822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.731 qpair failed and we were unable to recover it. 00:29:30.731 [2024-10-16 07:12:30.199219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.731 [2024-10-16 07:12:30.199249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.731 qpair failed and we were unable to recover it. 00:29:30.731 [2024-10-16 07:12:30.199515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.731 [2024-10-16 07:12:30.199543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.731 qpair failed and we were unable to recover it. 
00:29:30.731 [2024-10-16 07:12:30.199884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.731 [2024-10-16 07:12:30.199917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.731 qpair failed and we were unable to recover it. 00:29:30.731 [2024-10-16 07:12:30.200293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.731 [2024-10-16 07:12:30.200323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:30.731 qpair failed and we were unable to recover it. 00:29:31.003 [2024-10-16 07:12:30.200696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.003 [2024-10-16 07:12:30.200728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.003 qpair failed and we were unable to recover it. 00:29:31.003 [2024-10-16 07:12:30.201102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.004 [2024-10-16 07:12:30.201132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.004 qpair failed and we were unable to recover it. 00:29:31.004 [2024-10-16 07:12:30.201498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.004 [2024-10-16 07:12:30.201528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.004 qpair failed and we were unable to recover it. 00:29:31.004 [2024-10-16 07:12:30.201912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.004 [2024-10-16 07:12:30.201942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.004 qpair failed and we were unable to recover it. 00:29:31.004 [2024-10-16 07:12:30.202205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.004 [2024-10-16 07:12:30.202234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.004 qpair failed and we were unable to recover it. 00:29:31.004 [2024-10-16 07:12:30.202615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.004 [2024-10-16 07:12:30.202644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.004 qpair failed and we were unable to recover it. 00:29:31.004 [2024-10-16 07:12:30.202907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.004 [2024-10-16 07:12:30.202935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.004 qpair failed and we were unable to recover it. 00:29:31.004 [2024-10-16 07:12:30.203330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.004 [2024-10-16 07:12:30.203360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.004 qpair failed and we were unable to recover it. 
00:29:31.004 [2024-10-16 07:12:30.203730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.004 [2024-10-16 07:12:30.203758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.004 qpair failed and we were unable to recover it. 00:29:31.004 [2024-10-16 07:12:30.204111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.004 [2024-10-16 07:12:30.204141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.004 qpair failed and we were unable to recover it. 00:29:31.004 [2024-10-16 07:12:30.204505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.004 [2024-10-16 07:12:30.204534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.004 qpair failed and we were unable to recover it. 00:29:31.004 [2024-10-16 07:12:30.204838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.004 [2024-10-16 07:12:30.204886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.004 qpair failed and we were unable to recover it. 00:29:31.004 [2024-10-16 07:12:30.205124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.004 [2024-10-16 07:12:30.205156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.004 qpair failed and we were unable to recover it. 00:29:31.004 [2024-10-16 07:12:30.205531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.004 [2024-10-16 07:12:30.205560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.004 qpair failed and we were unable to recover it. 00:29:31.004 [2024-10-16 07:12:30.205923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.004 [2024-10-16 07:12:30.205954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.004 qpair failed and we were unable to recover it. 00:29:31.004 [2024-10-16 07:12:30.206327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.004 [2024-10-16 07:12:30.206356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.004 qpair failed and we were unable to recover it. 00:29:31.004 [2024-10-16 07:12:30.206724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.004 [2024-10-16 07:12:30.206752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.004 qpair failed and we were unable to recover it. 00:29:31.004 [2024-10-16 07:12:30.207186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.004 [2024-10-16 07:12:30.207216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.004 qpair failed and we were unable to recover it. 
00:29:31.004 [2024-10-16 07:12:30.207631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.004 [2024-10-16 07:12:30.207660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.004 qpair failed and we were unable to recover it. 00:29:31.004 [2024-10-16 07:12:30.208006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.004 [2024-10-16 07:12:30.208037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.004 qpair failed and we were unable to recover it. 00:29:31.004 [2024-10-16 07:12:30.208435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.004 [2024-10-16 07:12:30.208464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.004 qpair failed and we were unable to recover it. 00:29:31.004 [2024-10-16 07:12:30.208826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.004 [2024-10-16 07:12:30.208863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.004 qpair failed and we were unable to recover it. 00:29:31.004 [2024-10-16 07:12:30.209209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.004 [2024-10-16 07:12:30.209239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.004 qpair failed and we were unable to recover it. 00:29:31.004 [2024-10-16 07:12:30.209605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.004 [2024-10-16 07:12:30.209636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.004 qpair failed and we were unable to recover it. 00:29:31.004 [2024-10-16 07:12:30.210073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.004 [2024-10-16 07:12:30.210103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.004 qpair failed and we were unable to recover it. 00:29:31.004 [2024-10-16 07:12:30.210450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.004 [2024-10-16 07:12:30.210480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.004 qpair failed and we were unable to recover it. 00:29:31.004 [2024-10-16 07:12:30.210857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.004 [2024-10-16 07:12:30.210887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.004 qpair failed and we were unable to recover it. 00:29:31.004 [2024-10-16 07:12:30.211254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.004 [2024-10-16 07:12:30.211284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.004 qpair failed and we were unable to recover it. 
00:29:31.004 [2024-10-16 07:12:30.211641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.004 [2024-10-16 07:12:30.211672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.004 qpair failed and we were unable to recover it. 00:29:31.004 [2024-10-16 07:12:30.212030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.004 [2024-10-16 07:12:30.212059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.004 qpair failed and we were unable to recover it. 00:29:31.004 [2024-10-16 07:12:30.212410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.004 [2024-10-16 07:12:30.212440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.004 qpair failed and we were unable to recover it. 00:29:31.004 [2024-10-16 07:12:30.212813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.004 [2024-10-16 07:12:30.212841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.004 qpair failed and we were unable to recover it. 00:29:31.004 [2024-10-16 07:12:30.213264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.004 [2024-10-16 07:12:30.213292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.004 qpair failed and we were unable to recover it. 00:29:31.004 [2024-10-16 07:12:30.213658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.004 [2024-10-16 07:12:30.213687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.004 qpair failed and we were unable to recover it. 00:29:31.004 [2024-10-16 07:12:30.213950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.004 [2024-10-16 07:12:30.213979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.004 qpair failed and we were unable to recover it. 00:29:31.004 [2024-10-16 07:12:30.214360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.004 [2024-10-16 07:12:30.214389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.004 qpair failed and we were unable to recover it. 00:29:31.004 [2024-10-16 07:12:30.214754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.004 [2024-10-16 07:12:30.214782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.004 qpair failed and we were unable to recover it. 00:29:31.004 [2024-10-16 07:12:30.215176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.004 [2024-10-16 07:12:30.215206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.004 qpair failed and we were unable to recover it. 
00:29:31.004 [2024-10-16 07:12:30.215575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.004 [2024-10-16 07:12:30.215605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.004 qpair failed and we were unable to recover it.
00:29:31.010 (the three messages above repeat for every subsequent connect attempt on tqpair=0x7f64d0000b90, addr=10.0.0.2, port=4420, from [2024-10-16 07:12:30.215840] through [2024-10-16 07:12:30.295271]; every attempt fails with errno = 111 and the qpair is not recovered)
00:29:31.010 [2024-10-16 07:12:30.295642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.010 [2024-10-16 07:12:30.295670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.010 qpair failed and we were unable to recover it. 00:29:31.010 [2024-10-16 07:12:30.296038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.010 [2024-10-16 07:12:30.296068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.010 qpair failed and we were unable to recover it. 00:29:31.010 [2024-10-16 07:12:30.296435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.010 [2024-10-16 07:12:30.296464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.010 qpair failed and we were unable to recover it. 00:29:31.010 [2024-10-16 07:12:30.296834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.010 [2024-10-16 07:12:30.296873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.010 qpair failed and we were unable to recover it. 00:29:31.010 [2024-10-16 07:12:30.297220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.010 [2024-10-16 07:12:30.297251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.010 qpair failed and we were unable to recover it. 00:29:31.010 [2024-10-16 07:12:30.297506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.010 [2024-10-16 07:12:30.297538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.010 qpair failed and we were unable to recover it. 00:29:31.010 [2024-10-16 07:12:30.297939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.010 [2024-10-16 07:12:30.297969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.010 qpair failed and we were unable to recover it. 00:29:31.010 [2024-10-16 07:12:30.298345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.010 [2024-10-16 07:12:30.298375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.010 qpair failed and we were unable to recover it. 00:29:31.010 [2024-10-16 07:12:30.298726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.010 [2024-10-16 07:12:30.298755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.010 qpair failed and we were unable to recover it. 00:29:31.010 [2024-10-16 07:12:30.299095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.010 [2024-10-16 07:12:30.299125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.010 qpair failed and we were unable to recover it. 
00:29:31.010 [2024-10-16 07:12:30.299482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.010 [2024-10-16 07:12:30.299511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.010 qpair failed and we were unable to recover it. 00:29:31.010 [2024-10-16 07:12:30.299867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.010 [2024-10-16 07:12:30.299898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.010 qpair failed and we were unable to recover it. 00:29:31.010 [2024-10-16 07:12:30.300259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.010 [2024-10-16 07:12:30.300287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.010 qpair failed and we were unable to recover it. 00:29:31.010 [2024-10-16 07:12:30.300665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.010 [2024-10-16 07:12:30.300694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.010 qpair failed and we were unable to recover it. 00:29:31.010 [2024-10-16 07:12:30.301081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.010 [2024-10-16 07:12:30.301112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.010 qpair failed and we were unable to recover it. 00:29:31.010 [2024-10-16 07:12:30.301455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.010 [2024-10-16 07:12:30.301483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.010 qpair failed and we were unable to recover it. 00:29:31.010 [2024-10-16 07:12:30.301835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.010 [2024-10-16 07:12:30.301872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.010 qpair failed and we were unable to recover it. 00:29:31.010 [2024-10-16 07:12:30.302212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.010 [2024-10-16 07:12:30.302241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.010 qpair failed and we were unable to recover it. 00:29:31.010 [2024-10-16 07:12:30.302581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.010 [2024-10-16 07:12:30.302609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.010 qpair failed and we were unable to recover it. 00:29:31.010 [2024-10-16 07:12:30.302858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.010 [2024-10-16 07:12:30.302890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.010 qpair failed and we were unable to recover it. 
00:29:31.010 [2024-10-16 07:12:30.303249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.010 [2024-10-16 07:12:30.303278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.010 qpair failed and we were unable to recover it. 00:29:31.010 [2024-10-16 07:12:30.303635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.010 [2024-10-16 07:12:30.303663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.010 qpair failed and we were unable to recover it. 00:29:31.010 [2024-10-16 07:12:30.304030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.010 [2024-10-16 07:12:30.304061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.010 qpair failed and we were unable to recover it. 00:29:31.010 [2024-10-16 07:12:30.304490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.010 [2024-10-16 07:12:30.304518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.010 qpair failed and we were unable to recover it. 00:29:31.010 [2024-10-16 07:12:30.304873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.010 [2024-10-16 07:12:30.304903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.010 qpair failed and we were unable to recover it. 00:29:31.010 [2024-10-16 07:12:30.305267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.010 [2024-10-16 07:12:30.305295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.010 qpair failed and we were unable to recover it. 00:29:31.010 [2024-10-16 07:12:30.305658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.010 [2024-10-16 07:12:30.305687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.010 qpair failed and we were unable to recover it. 00:29:31.011 [2024-10-16 07:12:30.306056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.011 [2024-10-16 07:12:30.306085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.011 qpair failed and we were unable to recover it. 00:29:31.011 [2024-10-16 07:12:30.306434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.011 [2024-10-16 07:12:30.306463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.011 qpair failed and we were unable to recover it. 00:29:31.011 [2024-10-16 07:12:30.306821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.011 [2024-10-16 07:12:30.306860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.011 qpair failed and we were unable to recover it. 
00:29:31.011 [2024-10-16 07:12:30.307115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.011 [2024-10-16 07:12:30.307149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.011 qpair failed and we were unable to recover it. 00:29:31.011 [2024-10-16 07:12:30.307527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.011 [2024-10-16 07:12:30.307557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.011 qpair failed and we were unable to recover it. 00:29:31.011 [2024-10-16 07:12:30.307865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.011 [2024-10-16 07:12:30.307897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.011 qpair failed and we were unable to recover it. 00:29:31.011 [2024-10-16 07:12:30.308224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.011 [2024-10-16 07:12:30.308252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.011 qpair failed and we were unable to recover it. 00:29:31.011 [2024-10-16 07:12:30.308616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.011 [2024-10-16 07:12:30.308644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.011 qpair failed and we were unable to recover it. 00:29:31.011 [2024-10-16 07:12:30.309017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.011 [2024-10-16 07:12:30.309047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.011 qpair failed and we were unable to recover it. 00:29:31.011 [2024-10-16 07:12:30.309396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.011 [2024-10-16 07:12:30.309424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.011 qpair failed and we were unable to recover it. 00:29:31.011 [2024-10-16 07:12:30.309795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.011 [2024-10-16 07:12:30.309824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.011 qpair failed and we were unable to recover it. 00:29:31.011 [2024-10-16 07:12:30.310125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.011 [2024-10-16 07:12:30.310157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.011 qpair failed and we were unable to recover it. 00:29:31.011 [2024-10-16 07:12:30.310505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.011 [2024-10-16 07:12:30.310533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.011 qpair failed and we were unable to recover it. 
00:29:31.011 [2024-10-16 07:12:30.310890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.011 [2024-10-16 07:12:30.310920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.011 qpair failed and we were unable to recover it. 00:29:31.011 [2024-10-16 07:12:30.311177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.011 [2024-10-16 07:12:30.311208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.011 qpair failed and we were unable to recover it. 00:29:31.011 [2024-10-16 07:12:30.311599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.011 [2024-10-16 07:12:30.311627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.011 qpair failed and we were unable to recover it. 00:29:31.011 [2024-10-16 07:12:30.311894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.011 [2024-10-16 07:12:30.311923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.011 qpair failed and we were unable to recover it. 00:29:31.011 [2024-10-16 07:12:30.312273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.011 [2024-10-16 07:12:30.312302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.011 qpair failed and we were unable to recover it. 00:29:31.011 [2024-10-16 07:12:30.312666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.011 [2024-10-16 07:12:30.312694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.011 qpair failed and we were unable to recover it. 00:29:31.011 [2024-10-16 07:12:30.313067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.011 [2024-10-16 07:12:30.313097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.011 qpair failed and we were unable to recover it. 00:29:31.011 [2024-10-16 07:12:30.313456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.011 [2024-10-16 07:12:30.313485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.011 qpair failed and we were unable to recover it. 00:29:31.011 [2024-10-16 07:12:30.313852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.011 [2024-10-16 07:12:30.313884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.011 qpair failed and we were unable to recover it. 00:29:31.011 [2024-10-16 07:12:30.314230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.011 [2024-10-16 07:12:30.314259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.011 qpair failed and we were unable to recover it. 
00:29:31.011 [2024-10-16 07:12:30.314646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.011 [2024-10-16 07:12:30.314675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.011 qpair failed and we were unable to recover it. 00:29:31.011 [2024-10-16 07:12:30.315052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.011 [2024-10-16 07:12:30.315081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.011 qpair failed and we were unable to recover it. 00:29:31.011 [2024-10-16 07:12:30.315417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.011 [2024-10-16 07:12:30.315445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.011 qpair failed and we were unable to recover it. 00:29:31.011 [2024-10-16 07:12:30.315808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.011 [2024-10-16 07:12:30.315837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.011 qpair failed and we were unable to recover it. 00:29:31.011 [2024-10-16 07:12:30.316113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.011 [2024-10-16 07:12:30.316143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.011 qpair failed and we were unable to recover it. 00:29:31.011 [2024-10-16 07:12:30.316500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.011 [2024-10-16 07:12:30.316530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.011 qpair failed and we were unable to recover it. 00:29:31.011 [2024-10-16 07:12:30.316894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.011 [2024-10-16 07:12:30.316924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.011 qpair failed and we were unable to recover it. 00:29:31.011 [2024-10-16 07:12:30.317290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.011 [2024-10-16 07:12:30.317319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.011 qpair failed and we were unable to recover it. 00:29:31.011 [2024-10-16 07:12:30.317676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.011 [2024-10-16 07:12:30.317704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.011 qpair failed and we were unable to recover it. 00:29:31.011 [2024-10-16 07:12:30.318082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.011 [2024-10-16 07:12:30.318112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.011 qpair failed and we were unable to recover it. 
00:29:31.011 [2024-10-16 07:12:30.318473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.011 [2024-10-16 07:12:30.318501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.011 qpair failed and we were unable to recover it. 00:29:31.011 [2024-10-16 07:12:30.318841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.011 [2024-10-16 07:12:30.318879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.011 qpair failed and we were unable to recover it. 00:29:31.011 [2024-10-16 07:12:30.319269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.011 [2024-10-16 07:12:30.319298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.011 qpair failed and we were unable to recover it. 00:29:31.011 [2024-10-16 07:12:30.319663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.011 [2024-10-16 07:12:30.319691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.011 qpair failed and we were unable to recover it. 00:29:31.011 [2024-10-16 07:12:30.319978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.011 [2024-10-16 07:12:30.320008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.011 qpair failed and we were unable to recover it. 00:29:31.011 [2024-10-16 07:12:30.320247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.011 [2024-10-16 07:12:30.320278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.011 qpair failed and we were unable to recover it. 00:29:31.011 [2024-10-16 07:12:30.320676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.011 [2024-10-16 07:12:30.320705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.011 qpair failed and we were unable to recover it. 00:29:31.011 [2024-10-16 07:12:30.321082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.012 [2024-10-16 07:12:30.321115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.012 qpair failed and we were unable to recover it. 00:29:31.012 [2024-10-16 07:12:30.321497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.012 [2024-10-16 07:12:30.321526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.012 qpair failed and we were unable to recover it. 00:29:31.012 [2024-10-16 07:12:30.321887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.012 [2024-10-16 07:12:30.321920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.012 qpair failed and we were unable to recover it. 
00:29:31.012 [2024-10-16 07:12:30.322332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.012 [2024-10-16 07:12:30.322370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.012 qpair failed and we were unable to recover it. 00:29:31.012 [2024-10-16 07:12:30.322700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.012 [2024-10-16 07:12:30.322729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.012 qpair failed and we were unable to recover it. 00:29:31.012 [2024-10-16 07:12:30.323077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.012 [2024-10-16 07:12:30.323109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.012 qpair failed and we were unable to recover it. 00:29:31.012 [2024-10-16 07:12:30.323409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.012 [2024-10-16 07:12:30.323437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.012 qpair failed and we were unable to recover it. 00:29:31.012 [2024-10-16 07:12:30.323796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.012 [2024-10-16 07:12:30.323824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.012 qpair failed and we were unable to recover it. 00:29:31.012 [2024-10-16 07:12:30.324201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.012 [2024-10-16 07:12:30.324230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.012 qpair failed and we were unable to recover it. 00:29:31.012 [2024-10-16 07:12:30.324595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.012 [2024-10-16 07:12:30.324623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.012 qpair failed and we were unable to recover it. 00:29:31.012 [2024-10-16 07:12:30.325039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.012 [2024-10-16 07:12:30.325071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.012 qpair failed and we were unable to recover it. 00:29:31.012 [2024-10-16 07:12:30.325427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.012 [2024-10-16 07:12:30.325457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.012 qpair failed and we were unable to recover it. 00:29:31.012 [2024-10-16 07:12:30.325831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.012 [2024-10-16 07:12:30.325874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.012 qpair failed and we were unable to recover it. 
00:29:31.012 [2024-10-16 07:12:30.326213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.012 [2024-10-16 07:12:30.326244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.012 qpair failed and we were unable to recover it. 00:29:31.012 [2024-10-16 07:12:30.326619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.012 [2024-10-16 07:12:30.326648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.012 qpair failed and we were unable to recover it. 00:29:31.012 [2024-10-16 07:12:30.327019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.012 [2024-10-16 07:12:30.327051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.012 qpair failed and we were unable to recover it. 00:29:31.012 [2024-10-16 07:12:30.327442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.012 [2024-10-16 07:12:30.327471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.012 qpair failed and we were unable to recover it. 00:29:31.012 [2024-10-16 07:12:30.327832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.012 [2024-10-16 07:12:30.327886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.012 qpair failed and we were unable to recover it. 00:29:31.012 [2024-10-16 07:12:30.328324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.012 [2024-10-16 07:12:30.328354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.012 qpair failed and we were unable to recover it. 00:29:31.012 [2024-10-16 07:12:30.328641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.012 [2024-10-16 07:12:30.328669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.012 qpair failed and we were unable to recover it. 00:29:31.012 [2024-10-16 07:12:30.329060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.012 [2024-10-16 07:12:30.329093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.012 qpair failed and we were unable to recover it. 00:29:31.012 [2024-10-16 07:12:30.329453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.012 [2024-10-16 07:12:30.329482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.012 qpair failed and we were unable to recover it. 00:29:31.012 [2024-10-16 07:12:30.329835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.012 [2024-10-16 07:12:30.329875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.012 qpair failed and we were unable to recover it. 
00:29:31.012 [2024-10-16 07:12:30.330130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.012 [2024-10-16 07:12:30.330159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.012 qpair failed and we were unable to recover it. 00:29:31.012 [2024-10-16 07:12:30.330413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.012 [2024-10-16 07:12:30.330448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.012 qpair failed and we were unable to recover it. 00:29:31.012 [2024-10-16 07:12:30.330889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.012 [2024-10-16 07:12:30.330920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.012 qpair failed and we were unable to recover it. 00:29:31.012 [2024-10-16 07:12:30.331297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.012 [2024-10-16 07:12:30.331326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.012 qpair failed and we were unable to recover it. 00:29:31.012 [2024-10-16 07:12:30.331564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.012 [2024-10-16 07:12:30.331592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.012 qpair failed and we were unable to recover it. 00:29:31.012 [2024-10-16 07:12:30.331864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.012 [2024-10-16 07:12:30.331895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.012 qpair failed and we were unable to recover it. 00:29:31.012 [2024-10-16 07:12:30.332290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.012 [2024-10-16 07:12:30.332319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.012 qpair failed and we were unable to recover it. 00:29:31.012 [2024-10-16 07:12:30.332685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.012 [2024-10-16 07:12:30.332715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.012 qpair failed and we were unable to recover it. 00:29:31.012 [2024-10-16 07:12:30.333055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.012 [2024-10-16 07:12:30.333086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.012 qpair failed and we were unable to recover it. 00:29:31.012 [2024-10-16 07:12:30.333448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.012 [2024-10-16 07:12:30.333477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.012 qpair failed and we were unable to recover it. 
00:29:31.012 [2024-10-16 07:12:30.333839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.012 [2024-10-16 07:12:30.333880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.012 qpair failed and we were unable to recover it. 00:29:31.012 [2024-10-16 07:12:30.334331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.012 [2024-10-16 07:12:30.334361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.013 qpair failed and we were unable to recover it. 00:29:31.013 [2024-10-16 07:12:30.334716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.013 [2024-10-16 07:12:30.334746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.013 qpair failed and we were unable to recover it. 00:29:31.013 [2024-10-16 07:12:30.335102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.013 [2024-10-16 07:12:30.335132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.013 qpair failed and we were unable to recover it. 00:29:31.013 [2024-10-16 07:12:30.335485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.013 [2024-10-16 07:12:30.335514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.013 qpair failed and we were unable to recover it. 00:29:31.013 [2024-10-16 07:12:30.335953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.013 [2024-10-16 07:12:30.335984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.013 qpair failed and we were unable to recover it. 00:29:31.013 [2024-10-16 07:12:30.336347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.013 [2024-10-16 07:12:30.336376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.013 qpair failed and we were unable to recover it. 00:29:31.013 [2024-10-16 07:12:30.336768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.013 [2024-10-16 07:12:30.336799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.013 qpair failed and we were unable to recover it. 00:29:31.013 [2024-10-16 07:12:30.337238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.013 [2024-10-16 07:12:30.337270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.013 qpair failed and we were unable to recover it. 00:29:31.013 [2024-10-16 07:12:30.337627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.013 [2024-10-16 07:12:30.337655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.013 qpair failed and we were unable to recover it. 
00:29:31.013 [2024-10-16 07:12:30.338024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.013 [2024-10-16 07:12:30.338062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.013 qpair failed and we were unable to recover it. 00:29:31.013 [2024-10-16 07:12:30.338409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.013 [2024-10-16 07:12:30.338439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.013 qpair failed and we were unable to recover it. 00:29:31.013 [2024-10-16 07:12:30.338790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.013 [2024-10-16 07:12:30.338819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.013 qpair failed and we were unable to recover it. 00:29:31.013 [2024-10-16 07:12:30.339238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.013 [2024-10-16 07:12:30.339268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.013 qpair failed and we were unable to recover it. 00:29:31.013 [2024-10-16 07:12:30.339520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.013 [2024-10-16 07:12:30.339548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.013 qpair failed and we were unable to recover it. 00:29:31.013 [2024-10-16 07:12:30.339903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.013 [2024-10-16 07:12:30.339933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.013 qpair failed and we were unable to recover it. 00:29:31.013 [2024-10-16 07:12:30.340239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.013 [2024-10-16 07:12:30.340269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.013 qpair failed and we were unable to recover it. 00:29:31.013 [2024-10-16 07:12:30.340551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.013 [2024-10-16 07:12:30.340580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.013 qpair failed and we were unable to recover it. 00:29:31.013 [2024-10-16 07:12:30.340952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.013 [2024-10-16 07:12:30.340983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.013 qpair failed and we were unable to recover it. 00:29:31.013 [2024-10-16 07:12:30.341254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.013 [2024-10-16 07:12:30.341285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.013 qpair failed and we were unable to recover it. 
00:29:31.013 [2024-10-16 07:12:30.341618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.013 [2024-10-16 07:12:30.341650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.013 qpair failed and we were unable to recover it. 00:29:31.013 [2024-10-16 07:12:30.341994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.013 [2024-10-16 07:12:30.342026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.013 qpair failed and we were unable to recover it. 00:29:31.013 [2024-10-16 07:12:30.342377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.013 [2024-10-16 07:12:30.342409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.013 qpair failed and we were unable to recover it. 00:29:31.013 [2024-10-16 07:12:30.342778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.013 [2024-10-16 07:12:30.342809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.013 qpair failed and we were unable to recover it. 00:29:31.013 [2024-10-16 07:12:30.343186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.013 [2024-10-16 07:12:30.343217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.013 qpair failed and we were unable to recover it. 00:29:31.013 [2024-10-16 07:12:30.343560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.013 [2024-10-16 07:12:30.343592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.013 qpair failed and we were unable to recover it. 00:29:31.013 [2024-10-16 07:12:30.343947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.013 [2024-10-16 07:12:30.343978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.013 qpair failed and we were unable to recover it. 00:29:31.013 [2024-10-16 07:12:30.344329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.013 [2024-10-16 07:12:30.344358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.013 qpair failed and we were unable to recover it. 00:29:31.013 [2024-10-16 07:12:30.344699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.013 [2024-10-16 07:12:30.344727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.013 qpair failed and we were unable to recover it. 00:29:31.013 [2024-10-16 07:12:30.345066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.013 [2024-10-16 07:12:30.345097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.013 qpair failed and we were unable to recover it. 
00:29:31.013 [2024-10-16 07:12:30.345426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.013 [2024-10-16 07:12:30.345456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.013 qpair failed and we were unable to recover it.
... the same three-line error (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it.) repeats roughly 210 times in total, with only the timestamps advancing from 07:12:30.345426 to 07:12:30.425303 (console clock 00:29:31.013 -> 00:29:31.019); all occurrences report the identical tqpair, address, and port ...
00:29:31.019 [2024-10-16 07:12:30.425274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.019 [2024-10-16 07:12:30.425303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.019 qpair failed and we were unable to recover it.
00:29:31.019 [2024-10-16 07:12:30.425683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.019 [2024-10-16 07:12:30.425711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.019 qpair failed and we were unable to recover it. 00:29:31.019 [2024-10-16 07:12:30.426079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.019 [2024-10-16 07:12:30.426108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.019 qpair failed and we were unable to recover it. 00:29:31.019 [2024-10-16 07:12:30.426450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.019 [2024-10-16 07:12:30.426478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.019 qpair failed and we were unable to recover it. 00:29:31.019 [2024-10-16 07:12:30.426839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.019 [2024-10-16 07:12:30.426877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.019 qpair failed and we were unable to recover it. 00:29:31.019 [2024-10-16 07:12:30.427231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.019 [2024-10-16 07:12:30.427259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.019 qpair failed and we were unable to recover it. 00:29:31.019 [2024-10-16 07:12:30.427640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.019 [2024-10-16 07:12:30.427669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.019 qpair failed and we were unable to recover it. 00:29:31.019 [2024-10-16 07:12:30.428033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.019 [2024-10-16 07:12:30.428063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.019 qpair failed and we were unable to recover it. 00:29:31.019 [2024-10-16 07:12:30.428429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.019 [2024-10-16 07:12:30.428459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.019 qpair failed and we were unable to recover it. 00:29:31.019 [2024-10-16 07:12:30.428831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.019 [2024-10-16 07:12:30.428869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.019 qpair failed and we were unable to recover it. 00:29:31.019 [2024-10-16 07:12:30.429219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.019 [2024-10-16 07:12:30.429249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.019 qpair failed and we were unable to recover it. 
00:29:31.019 [2024-10-16 07:12:30.431892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.019 [2024-10-16 07:12:30.431923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.019 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3312725 Killed "${NVMF_APP[@]}" "$@"
00:29:31.019 qpair failed and we were unable to recover it.
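errno 111 is ECONNREFUSED: the harness has just killed the running target application (the bash "Killed" message above, pid 3312725), so every reconnect attempt from the initiator is refused until a new process listens on 10.0.0.2:4420. A minimal bash sketch of the same failure mode, not taken from the test scripts; the address and port merely mirror the log:

    # Hedged repro sketch: with nothing listening on the target port, a TCP
    # connect is refused immediately, i.e. errno 111 as in the log above.
    if ! (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; then
        echo "connect() to 10.0.0.2:4420 refused (ECONNREFUSED, errno 111)"
    fi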
00:29:31.019 07:12:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:29:31.019 07:12:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:29:31.019 07:12:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:29:31.019 07:12:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:29:31.019 07:12:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:31.020 07:12:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=3313759
00:29:31.020 07:12:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 3313759
00:29:31.020 07:12:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:29:31.020 07:12:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3313759 ']'
00:29:31.020 07:12:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:31.020 07:12:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:29:31.020 07:12:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:31.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:31.020 07:12:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:29:31.020 07:12:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
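The trace above is the recovery path: disconnect_init restarts nvmf_tgt inside the cvl_0_0_ns_spdk network namespace, and waitforlisten (pid 3313759) then waits for the application's RPC socket at /var/tmp/spdk.sock to appear. A minimal sketch of such a wait loop, assuming a simple poll on the socket file; rpc_addr and max_retries mirror the traced values, but the real waitforlisten helper may check more (e.g. that the pid is still alive):

    # Hedged sketch of a wait-for-listen loop; the polling body is an assumption.
    rpc_addr=/var/tmp/spdk.sock
    max_retries=100
    for ((i = 0; i < max_retries; i++)); do
        [[ -S "$rpc_addr" ]] && break   # RPC socket exists: target is listening
        sleep 0.1
    done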
00:29:31.023 [2024-10-16 07:12:30.483049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.023 [2024-10-16 07:12:30.483078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.023 qpair failed and we were unable to recover it. 00:29:31.023 [2024-10-16 07:12:30.483448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.023 [2024-10-16 07:12:30.483477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.023 qpair failed and we were unable to recover it. 00:29:31.023 [2024-10-16 07:12:30.483738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.023 [2024-10-16 07:12:30.483767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.023 qpair failed and we were unable to recover it. 00:29:31.023 [2024-10-16 07:12:30.484029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.023 [2024-10-16 07:12:30.484060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.023 qpair failed and we were unable to recover it. 00:29:31.023 [2024-10-16 07:12:30.484456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.023 [2024-10-16 07:12:30.484486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.023 qpair failed and we were unable to recover it. 00:29:31.023 [2024-10-16 07:12:30.484861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.023 [2024-10-16 07:12:30.484892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.023 qpair failed and we were unable to recover it. 00:29:31.023 [2024-10-16 07:12:30.485250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.023 [2024-10-16 07:12:30.485279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.023 qpair failed and we were unable to recover it. 00:29:31.023 [2024-10-16 07:12:30.485645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.023 [2024-10-16 07:12:30.485674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.023 qpair failed and we were unable to recover it. 00:29:31.023 [2024-10-16 07:12:30.486045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.023 [2024-10-16 07:12:30.486075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.023 qpair failed and we were unable to recover it. 00:29:31.023 [2024-10-16 07:12:30.486443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.023 [2024-10-16 07:12:30.486471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.023 qpair failed and we were unable to recover it. 
00:29:31.023 [2024-10-16 07:12:30.486841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.023 [2024-10-16 07:12:30.486882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.023 qpair failed and we were unable to recover it. 00:29:31.023 [2024-10-16 07:12:30.487246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.023 [2024-10-16 07:12:30.487276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.023 qpair failed and we were unable to recover it. 00:29:31.023 [2024-10-16 07:12:30.487633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.023 [2024-10-16 07:12:30.487663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.023 qpair failed and we were unable to recover it. 00:29:31.023 [2024-10-16 07:12:30.488004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.023 [2024-10-16 07:12:30.488034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.023 qpair failed and we were unable to recover it. 00:29:31.023 [2024-10-16 07:12:30.488397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.023 [2024-10-16 07:12:30.488425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.023 qpair failed and we were unable to recover it. 00:29:31.023 [2024-10-16 07:12:30.488796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.023 [2024-10-16 07:12:30.488825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.023 qpair failed and we were unable to recover it. 00:29:31.023 [2024-10-16 07:12:30.489094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.023 [2024-10-16 07:12:30.489124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.023 qpair failed and we were unable to recover it. 00:29:31.023 [2024-10-16 07:12:30.489472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.023 [2024-10-16 07:12:30.489501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.023 qpair failed and we were unable to recover it. 00:29:31.023 [2024-10-16 07:12:30.489751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.023 [2024-10-16 07:12:30.489779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.023 qpair failed and we were unable to recover it. 00:29:31.023 [2024-10-16 07:12:30.490147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.023 [2024-10-16 07:12:30.490178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.023 qpair failed and we were unable to recover it. 
00:29:31.023 [2024-10-16 07:12:30.490545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.023 [2024-10-16 07:12:30.490574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.023 qpair failed and we were unable to recover it. 00:29:31.023 [2024-10-16 07:12:30.490838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.023 [2024-10-16 07:12:30.490878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.023 qpair failed and we were unable to recover it. 00:29:31.023 [2024-10-16 07:12:30.491235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.023 [2024-10-16 07:12:30.491265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.023 qpair failed and we were unable to recover it. 00:29:31.023 [2024-10-16 07:12:30.491628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.023 [2024-10-16 07:12:30.491656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.023 qpair failed and we were unable to recover it. 00:29:31.023 [2024-10-16 07:12:30.492024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.023 [2024-10-16 07:12:30.492054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.023 qpair failed and we were unable to recover it. 00:29:31.297 [2024-10-16 07:12:30.492420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.297 [2024-10-16 07:12:30.492460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.297 qpair failed and we were unable to recover it. 00:29:31.297 [2024-10-16 07:12:30.492829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.297 [2024-10-16 07:12:30.492871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.297 qpair failed and we were unable to recover it. 00:29:31.297 [2024-10-16 07:12:30.493234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.297 [2024-10-16 07:12:30.493265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.297 qpair failed and we were unable to recover it. 00:29:31.297 [2024-10-16 07:12:30.493622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.297 [2024-10-16 07:12:30.493651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.297 qpair failed and we were unable to recover it. 00:29:31.297 [2024-10-16 07:12:30.494019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.297 [2024-10-16 07:12:30.494049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.297 qpair failed and we were unable to recover it. 
00:29:31.297 [2024-10-16 07:12:30.494424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.297 [2024-10-16 07:12:30.494452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.297 qpair failed and we were unable to recover it. 00:29:31.297 [2024-10-16 07:12:30.494840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.297 [2024-10-16 07:12:30.494883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.297 qpair failed and we were unable to recover it. 00:29:31.297 [2024-10-16 07:12:30.495284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.297 [2024-10-16 07:12:30.495313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.297 qpair failed and we were unable to recover it. 00:29:31.297 [2024-10-16 07:12:30.495667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.297 [2024-10-16 07:12:30.495695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.297 qpair failed and we were unable to recover it. 00:29:31.297 [2024-10-16 07:12:30.496064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.297 [2024-10-16 07:12:30.496094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.297 qpair failed and we were unable to recover it. 00:29:31.297 [2024-10-16 07:12:30.496547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.297 [2024-10-16 07:12:30.496575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.297 qpair failed and we were unable to recover it. 00:29:31.297 [2024-10-16 07:12:30.496907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.297 [2024-10-16 07:12:30.496938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.297 qpair failed and we were unable to recover it. 00:29:31.297 [2024-10-16 07:12:30.497313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.297 [2024-10-16 07:12:30.497342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.297 qpair failed and we were unable to recover it. 00:29:31.297 [2024-10-16 07:12:30.497614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.297 [2024-10-16 07:12:30.497643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.297 qpair failed and we were unable to recover it. 00:29:31.297 [2024-10-16 07:12:30.498012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.297 [2024-10-16 07:12:30.498042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.297 qpair failed and we were unable to recover it. 
00:29:31.297 [2024-10-16 07:12:30.498408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.297 [2024-10-16 07:12:30.498436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.297 qpair failed and we were unable to recover it.
00:29:31.297 [2024-10-16 07:12:30.498806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.297 [2024-10-16 07:12:30.498836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.297 qpair failed and we were unable to recover it.
00:29:31.297 [2024-10-16 07:12:30.499209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.297 [2024-10-16 07:12:30.499238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
[2024-10-16 07:12:30.499237] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization...
00:29:31.297 qpair failed and we were unable to recover it.
00:29:31.297 [2024-10-16 07:12:30.499301] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:31.297 [2024-10-16 07:12:30.499609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.297 [2024-10-16 07:12:30.499638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.297 qpair failed and we were unable to recover it.
00:29:31.297 [2024-10-16 07:12:30.499901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.297 [2024-10-16 07:12:30.499929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.297 qpair failed and we were unable to recover it.
00:29:31.297 [2024-10-16 07:12:30.500167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.297 [2024-10-16 07:12:30.500196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.297 qpair failed and we were unable to recover it.
00:29:31.297 [2024-10-16 07:12:30.500555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.297 [2024-10-16 07:12:30.500586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.297 qpair failed and we were unable to recover it.
00:29:31.297 [2024-10-16 07:12:30.500953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.297 [2024-10-16 07:12:30.500984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.297 qpair failed and we were unable to recover it.
00:29:31.297 [2024-10-16 07:12:30.501373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.297 [2024-10-16 07:12:30.501402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.297 qpair failed and we were unable to recover it.
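Interleaved with the connect failures above, a second SPDK process (the nvmf target under test) begins initializing and dumps the DPDK EAL parameters it was launched with, which is why the "Starting SPDK" line lands in the middle of a failure record. A hedged sketch of handing a similar, trimmed argument vector to DPDK's EAL init (assumes a DPDK development install; SPDK builds this argv internally rather than in user code):

    /* Initialize DPDK's EAL with a subset of the options dumped above.
     * Illustration only; the option strings are taken from the log line. */
    #include <stdio.h>
    #include <rte_eal.h>

    int main(void)
    {
        char *eal_argv[] = {
            "nvmf", "-c", "0xF0", "--no-telemetry",
            "--base-virtaddr=0x200000000000",
            "--match-allocations",
            "--file-prefix=spdk0",
            "--proc-type=auto",
        };
        int eal_argc = (int)(sizeof(eal_argv) / sizeof(eal_argv[0]));

        if (rte_eal_init(eal_argc, eal_argv) < 0) {
            fprintf(stderr, "EAL initialization failed\n");
            return 1;
        }
        puts("EAL initialized");
        rte_eal_cleanup();
        return 0;
    }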
00:29:31.297 [2024-10-16 07:12:30.501769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.297 [2024-10-16 07:12:30.501799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.297 qpair failed and we were unable to recover it. 00:29:31.297 [2024-10-16 07:12:30.502163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.297 [2024-10-16 07:12:30.502195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.297 qpair failed and we were unable to recover it. 00:29:31.297 [2024-10-16 07:12:30.502453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.297 [2024-10-16 07:12:30.502483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.297 qpair failed and we were unable to recover it. 00:29:31.297 [2024-10-16 07:12:30.502860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.297 [2024-10-16 07:12:30.502891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.297 qpair failed and we were unable to recover it. 00:29:31.297 [2024-10-16 07:12:30.503128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.297 [2024-10-16 07:12:30.503161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.297 qpair failed and we were unable to recover it. 00:29:31.297 [2024-10-16 07:12:30.503575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.297 [2024-10-16 07:12:30.503605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.297 qpair failed and we were unable to recover it. 00:29:31.297 [2024-10-16 07:12:30.503965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.297 [2024-10-16 07:12:30.503997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.297 qpair failed and we were unable to recover it. 00:29:31.297 [2024-10-16 07:12:30.504399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.297 [2024-10-16 07:12:30.504429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.297 qpair failed and we were unable to recover it. 00:29:31.298 [2024-10-16 07:12:30.504668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.298 [2024-10-16 07:12:30.504697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.298 qpair failed and we were unable to recover it. 00:29:31.298 [2024-10-16 07:12:30.504943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.298 [2024-10-16 07:12:30.504976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.298 qpair failed and we were unable to recover it. 
00:29:31.298 [2024-10-16 07:12:30.505231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.298 [2024-10-16 07:12:30.505261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.298 qpair failed and we were unable to recover it. 00:29:31.298 [2024-10-16 07:12:30.505509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.298 [2024-10-16 07:12:30.505542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.298 qpair failed and we were unable to recover it. 00:29:31.298 [2024-10-16 07:12:30.505889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.298 [2024-10-16 07:12:30.505920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.298 qpair failed and we were unable to recover it. 00:29:31.298 [2024-10-16 07:12:30.506287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.298 [2024-10-16 07:12:30.506317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.298 qpair failed and we were unable to recover it. 00:29:31.298 [2024-10-16 07:12:30.506672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.298 [2024-10-16 07:12:30.506701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.298 qpair failed and we were unable to recover it. 00:29:31.298 [2024-10-16 07:12:30.506966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.298 [2024-10-16 07:12:30.507003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.298 qpair failed and we were unable to recover it. 00:29:31.298 [2024-10-16 07:12:30.507375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.298 [2024-10-16 07:12:30.507404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.298 qpair failed and we were unable to recover it. 00:29:31.298 [2024-10-16 07:12:30.507765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.298 [2024-10-16 07:12:30.507795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.298 qpair failed and we were unable to recover it. 00:29:31.298 [2024-10-16 07:12:30.508159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.298 [2024-10-16 07:12:30.508190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.298 qpair failed and we were unable to recover it. 00:29:31.298 [2024-10-16 07:12:30.508551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.298 [2024-10-16 07:12:30.508580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.298 qpair failed and we were unable to recover it. 
00:29:31.298 [2024-10-16 07:12:30.508957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.298 [2024-10-16 07:12:30.508988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.298 qpair failed and we were unable to recover it. 00:29:31.298 [2024-10-16 07:12:30.509357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.298 [2024-10-16 07:12:30.509387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.298 qpair failed and we were unable to recover it. 00:29:31.298 [2024-10-16 07:12:30.509774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.298 [2024-10-16 07:12:30.509804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.298 qpair failed and we were unable to recover it. 00:29:31.298 [2024-10-16 07:12:30.510190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.298 [2024-10-16 07:12:30.510221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.298 qpair failed and we were unable to recover it. 00:29:31.298 [2024-10-16 07:12:30.510585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.298 [2024-10-16 07:12:30.510615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.298 qpair failed and we were unable to recover it. 00:29:31.298 [2024-10-16 07:12:30.510977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.298 [2024-10-16 07:12:30.511007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.298 qpair failed and we were unable to recover it. 00:29:31.298 [2024-10-16 07:12:30.511414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.298 [2024-10-16 07:12:30.511443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.298 qpair failed and we were unable to recover it. 00:29:31.298 [2024-10-16 07:12:30.511811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.298 [2024-10-16 07:12:30.511841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.298 qpair failed and we were unable to recover it. 00:29:31.298 [2024-10-16 07:12:30.512295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.298 [2024-10-16 07:12:30.512324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.298 qpair failed and we were unable to recover it. 00:29:31.298 [2024-10-16 07:12:30.512550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.298 [2024-10-16 07:12:30.512582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.298 qpair failed and we were unable to recover it. 
00:29:31.298 [2024-10-16 07:12:30.512965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.298 [2024-10-16 07:12:30.512996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.298 qpair failed and we were unable to recover it. 00:29:31.298 [2024-10-16 07:12:30.513261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.298 [2024-10-16 07:12:30.513289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.298 qpair failed and we were unable to recover it. 00:29:31.298 [2024-10-16 07:12:30.513652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.298 [2024-10-16 07:12:30.513682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.298 qpair failed and we were unable to recover it. 00:29:31.298 [2024-10-16 07:12:30.513938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.298 [2024-10-16 07:12:30.513968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.298 qpair failed and we were unable to recover it. 00:29:31.298 [2024-10-16 07:12:30.514321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.298 [2024-10-16 07:12:30.514351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.298 qpair failed and we were unable to recover it. 00:29:31.298 [2024-10-16 07:12:30.514693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.298 [2024-10-16 07:12:30.514722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.298 qpair failed and we were unable to recover it. 00:29:31.298 [2024-10-16 07:12:30.515066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.298 [2024-10-16 07:12:30.515096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.298 qpair failed and we were unable to recover it. 00:29:31.298 [2024-10-16 07:12:30.515465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.298 [2024-10-16 07:12:30.515493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.298 qpair failed and we were unable to recover it. 00:29:31.298 [2024-10-16 07:12:30.515854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.298 [2024-10-16 07:12:30.515886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.298 qpair failed and we were unable to recover it. 00:29:31.298 [2024-10-16 07:12:30.516331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.298 [2024-10-16 07:12:30.516361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.298 qpair failed and we were unable to recover it. 
00:29:31.298 [2024-10-16 07:12:30.516657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.298 [2024-10-16 07:12:30.516687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.298 qpair failed and we were unable to recover it. 00:29:31.298 [2024-10-16 07:12:30.517021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.298 [2024-10-16 07:12:30.517052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.298 qpair failed and we were unable to recover it. 00:29:31.298 [2024-10-16 07:12:30.517335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.298 [2024-10-16 07:12:30.517363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.298 qpair failed and we were unable to recover it. 00:29:31.298 [2024-10-16 07:12:30.517600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.298 [2024-10-16 07:12:30.517633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.298 qpair failed and we were unable to recover it. 00:29:31.298 [2024-10-16 07:12:30.517995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.298 [2024-10-16 07:12:30.518027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.298 qpair failed and we were unable to recover it. 00:29:31.298 [2024-10-16 07:12:30.518426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.298 [2024-10-16 07:12:30.518455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.298 qpair failed and we were unable to recover it. 00:29:31.298 [2024-10-16 07:12:30.518861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.298 [2024-10-16 07:12:30.518891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.298 qpair failed and we were unable to recover it. 00:29:31.298 [2024-10-16 07:12:30.519247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.298 [2024-10-16 07:12:30.519278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.298 qpair failed and we were unable to recover it. 00:29:31.299 [2024-10-16 07:12:30.519634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.299 [2024-10-16 07:12:30.519662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.299 qpair failed and we were unable to recover it. 00:29:31.299 [2024-10-16 07:12:30.520009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.299 [2024-10-16 07:12:30.520039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.299 qpair failed and we were unable to recover it. 
00:29:31.299 [2024-10-16 07:12:30.520381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.299 [2024-10-16 07:12:30.520412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.299 qpair failed and we were unable to recover it. 00:29:31.299 [2024-10-16 07:12:30.520782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.299 [2024-10-16 07:12:30.520811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.299 qpair failed and we were unable to recover it. 00:29:31.299 [2024-10-16 07:12:30.521185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.299 [2024-10-16 07:12:30.521216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.299 qpair failed and we were unable to recover it. 00:29:31.299 [2024-10-16 07:12:30.521578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.299 [2024-10-16 07:12:30.521608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.299 qpair failed and we were unable to recover it. 00:29:31.299 [2024-10-16 07:12:30.521970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.299 [2024-10-16 07:12:30.522001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.299 qpair failed and we were unable to recover it. 00:29:31.299 [2024-10-16 07:12:30.522405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.299 [2024-10-16 07:12:30.522440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.299 qpair failed and we were unable to recover it. 00:29:31.299 [2024-10-16 07:12:30.522783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.299 [2024-10-16 07:12:30.522813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.299 qpair failed and we were unable to recover it. 00:29:31.299 [2024-10-16 07:12:30.523205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.299 [2024-10-16 07:12:30.523235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.299 qpair failed and we were unable to recover it. 00:29:31.299 [2024-10-16 07:12:30.523673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.299 [2024-10-16 07:12:30.523702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.299 qpair failed and we were unable to recover it. 00:29:31.299 [2024-10-16 07:12:30.524097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.299 [2024-10-16 07:12:30.524128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.299 qpair failed and we were unable to recover it. 
00:29:31.299 [2024-10-16 07:12:30.524482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.299 [2024-10-16 07:12:30.524511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.299 qpair failed and we were unable to recover it. 00:29:31.299 [2024-10-16 07:12:30.524767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.299 [2024-10-16 07:12:30.524796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.299 qpair failed and we were unable to recover it. 00:29:31.299 [2024-10-16 07:12:30.525210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.299 [2024-10-16 07:12:30.525240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.299 qpair failed and we were unable to recover it. 00:29:31.299 [2024-10-16 07:12:30.525627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.299 [2024-10-16 07:12:30.525656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.299 qpair failed and we were unable to recover it. 00:29:31.299 [2024-10-16 07:12:30.526025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.299 [2024-10-16 07:12:30.526057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.299 qpair failed and we were unable to recover it. 00:29:31.299 [2024-10-16 07:12:30.526445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.299 [2024-10-16 07:12:30.526473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.299 qpair failed and we were unable to recover it. 00:29:31.299 [2024-10-16 07:12:30.526836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.299 [2024-10-16 07:12:30.526878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.299 qpair failed and we were unable to recover it. 00:29:31.299 [2024-10-16 07:12:30.527235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.299 [2024-10-16 07:12:30.527265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.299 qpair failed and we were unable to recover it. 00:29:31.299 [2024-10-16 07:12:30.527636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.299 [2024-10-16 07:12:30.527664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.299 qpair failed and we were unable to recover it. 00:29:31.299 [2024-10-16 07:12:30.528078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.299 [2024-10-16 07:12:30.528110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.299 qpair failed and we were unable to recover it. 
00:29:31.299 [2024-10-16 07:12:30.528484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.299 [2024-10-16 07:12:30.528514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.299 qpair failed and we were unable to recover it. 00:29:31.299 [2024-10-16 07:12:30.528879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.299 [2024-10-16 07:12:30.528911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.299 qpair failed and we were unable to recover it. 00:29:31.299 [2024-10-16 07:12:30.529263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.299 [2024-10-16 07:12:30.529294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.299 qpair failed and we were unable to recover it. 00:29:31.299 [2024-10-16 07:12:30.529663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.299 [2024-10-16 07:12:30.529693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.299 qpair failed and we were unable to recover it. 00:29:31.299 [2024-10-16 07:12:30.529867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.299 [2024-10-16 07:12:30.529900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.299 qpair failed and we were unable to recover it. 00:29:31.299 [2024-10-16 07:12:30.530281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.299 [2024-10-16 07:12:30.530311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.299 qpair failed and we were unable to recover it. 00:29:31.299 [2024-10-16 07:12:30.530585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.299 [2024-10-16 07:12:30.530614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.299 qpair failed and we were unable to recover it. 00:29:31.299 [2024-10-16 07:12:30.530981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.299 [2024-10-16 07:12:30.531013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.299 qpair failed and we were unable to recover it. 00:29:31.299 [2024-10-16 07:12:30.531398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.299 [2024-10-16 07:12:30.531427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.299 qpair failed and we were unable to recover it. 00:29:31.299 [2024-10-16 07:12:30.531786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.299 [2024-10-16 07:12:30.531815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.299 qpair failed and we were unable to recover it. 
00:29:31.299 [2024-10-16 07:12:30.532166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.299 [2024-10-16 07:12:30.532196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.299 qpair failed and we were unable to recover it. 00:29:31.299 [2024-10-16 07:12:30.532559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.299 [2024-10-16 07:12:30.532588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.299 qpair failed and we were unable to recover it. 00:29:31.299 [2024-10-16 07:12:30.532969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.299 [2024-10-16 07:12:30.532999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.299 qpair failed and we were unable to recover it. 00:29:31.299 [2024-10-16 07:12:30.533248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.299 [2024-10-16 07:12:30.533280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.299 qpair failed and we were unable to recover it. 00:29:31.299 [2024-10-16 07:12:30.533663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.299 [2024-10-16 07:12:30.533691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.299 qpair failed and we were unable to recover it. 00:29:31.299 [2024-10-16 07:12:30.534070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.299 [2024-10-16 07:12:30.534100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.299 qpair failed and we were unable to recover it. 00:29:31.299 [2024-10-16 07:12:30.534458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.299 [2024-10-16 07:12:30.534487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.299 qpair failed and we were unable to recover it. 00:29:31.299 [2024-10-16 07:12:30.534864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.299 [2024-10-16 07:12:30.534895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.299 qpair failed and we were unable to recover it. 00:29:31.299 [2024-10-16 07:12:30.535156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.300 [2024-10-16 07:12:30.535185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.300 qpair failed and we were unable to recover it. 00:29:31.300 [2024-10-16 07:12:30.535563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.300 [2024-10-16 07:12:30.535592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.300 qpair failed and we were unable to recover it. 
00:29:31.300 [2024-10-16 07:12:30.535975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.300 [2024-10-16 07:12:30.536006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.300 qpair failed and we were unable to recover it.
00:29:31.300 [... the same three-record sequence (connect() failed, errno = 111; sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously from 07:12:30.536352 through 07:12:30.585636 ...]
00:29:31.303 [... repeats continue from 07:12:30.586042 through 07:12:30.587480 ...]
00:29:31.303 [2024-10-16 07:12:30.587535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:29:31.303 [... repeats continue from 07:12:30.587861 through 07:12:30.589144 ...]
00:29:31.303 [... repeats continue from 07:12:30.589498 through 07:12:30.614656 ...]
00:29:31.305 [2024-10-16 07:12:30.614900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.305 [2024-10-16 07:12:30.614930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.305 qpair failed and we were unable to recover it.
00:29:31.305 [2024-10-16 07:12:30.615334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.305 [2024-10-16 07:12:30.615364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.305 qpair failed and we were unable to recover it. 00:29:31.305 [2024-10-16 07:12:30.615616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.305 [2024-10-16 07:12:30.615645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.305 qpair failed and we were unable to recover it. 00:29:31.305 [2024-10-16 07:12:30.616010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.305 [2024-10-16 07:12:30.616040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.305 qpair failed and we were unable to recover it. 00:29:31.305 [2024-10-16 07:12:30.616404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.305 [2024-10-16 07:12:30.616433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.305 qpair failed and we were unable to recover it. 00:29:31.305 [2024-10-16 07:12:30.616808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.305 [2024-10-16 07:12:30.616837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.305 qpair failed and we were unable to recover it. 00:29:31.305 [2024-10-16 07:12:30.617119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.305 [2024-10-16 07:12:30.617150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.305 qpair failed and we were unable to recover it. 00:29:31.305 [2024-10-16 07:12:30.617543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.305 [2024-10-16 07:12:30.617574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.305 qpair failed and we were unable to recover it. 00:29:31.305 [2024-10-16 07:12:30.617942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.305 [2024-10-16 07:12:30.617972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.305 qpair failed and we were unable to recover it. 00:29:31.305 [2024-10-16 07:12:30.618346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.305 [2024-10-16 07:12:30.618376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.305 qpair failed and we were unable to recover it. 00:29:31.305 [2024-10-16 07:12:30.618729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.305 [2024-10-16 07:12:30.618759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.305 qpair failed and we were unable to recover it. 
00:29:31.305 [2024-10-16 07:12:30.619221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.305 [2024-10-16 07:12:30.619253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.305 qpair failed and we were unable to recover it. 00:29:31.305 [2024-10-16 07:12:30.619609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.305 [2024-10-16 07:12:30.619639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.305 qpair failed and we were unable to recover it. 00:29:31.305 [2024-10-16 07:12:30.619915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.305 [2024-10-16 07:12:30.619946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.305 qpair failed and we were unable to recover it. 00:29:31.305 [2024-10-16 07:12:30.620286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.305 [2024-10-16 07:12:30.620315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.305 qpair failed and we were unable to recover it. 00:29:31.305 [2024-10-16 07:12:30.620547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.305 [2024-10-16 07:12:30.620580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.305 qpair failed and we were unable to recover it. 00:29:31.305 [2024-10-16 07:12:30.620921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.305 [2024-10-16 07:12:30.620952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.305 qpair failed and we were unable to recover it. 00:29:31.305 [2024-10-16 07:12:30.621320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.305 [2024-10-16 07:12:30.621349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.305 qpair failed and we were unable to recover it. 00:29:31.305 [2024-10-16 07:12:30.621689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.305 [2024-10-16 07:12:30.621719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.305 qpair failed and we were unable to recover it. 00:29:31.305 [2024-10-16 07:12:30.622078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.306 [2024-10-16 07:12:30.622109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.306 qpair failed and we were unable to recover it. 00:29:31.306 [2024-10-16 07:12:30.622366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.306 [2024-10-16 07:12:30.622395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.306 qpair failed and we were unable to recover it. 
00:29:31.306 [2024-10-16 07:12:30.622774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.306 [2024-10-16 07:12:30.622804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.306 qpair failed and we were unable to recover it. 00:29:31.306 [2024-10-16 07:12:30.623108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.306 [2024-10-16 07:12:30.623140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.306 qpair failed and we were unable to recover it. 00:29:31.306 [2024-10-16 07:12:30.623512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.306 [2024-10-16 07:12:30.623544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.306 qpair failed and we were unable to recover it. 00:29:31.306 [2024-10-16 07:12:30.623912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.306 [2024-10-16 07:12:30.623943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.306 qpair failed and we were unable to recover it. 00:29:31.306 [2024-10-16 07:12:30.624305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.306 [2024-10-16 07:12:30.624335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.306 qpair failed and we were unable to recover it. 00:29:31.306 [2024-10-16 07:12:30.624721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.306 [2024-10-16 07:12:30.624752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.306 qpair failed and we were unable to recover it. 00:29:31.306 [2024-10-16 07:12:30.624988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.306 [2024-10-16 07:12:30.625021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.306 qpair failed and we were unable to recover it. 00:29:31.306 [2024-10-16 07:12:30.625376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.306 [2024-10-16 07:12:30.625404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.306 qpair failed and we were unable to recover it. 00:29:31.306 [2024-10-16 07:12:30.625810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.306 [2024-10-16 07:12:30.625841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.306 qpair failed and we were unable to recover it. 00:29:31.306 [2024-10-16 07:12:30.626277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.306 [2024-10-16 07:12:30.626307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.306 qpair failed and we were unable to recover it. 
00:29:31.306 [2024-10-16 07:12:30.626558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.306 [2024-10-16 07:12:30.626586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.306 qpair failed and we were unable to recover it. 00:29:31.306 [2024-10-16 07:12:30.626964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.306 [2024-10-16 07:12:30.626997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.306 qpair failed and we were unable to recover it. 00:29:31.306 [2024-10-16 07:12:30.627430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.306 [2024-10-16 07:12:30.627461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.306 qpair failed and we were unable to recover it. 00:29:31.306 [2024-10-16 07:12:30.627821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.306 [2024-10-16 07:12:30.627866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.306 qpair failed and we were unable to recover it. 00:29:31.306 [2024-10-16 07:12:30.628233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.306 [2024-10-16 07:12:30.628267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.306 qpair failed and we were unable to recover it. 00:29:31.306 [2024-10-16 07:12:30.628633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.306 [2024-10-16 07:12:30.628663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.306 qpair failed and we were unable to recover it. 00:29:31.306 [2024-10-16 07:12:30.629007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.306 [2024-10-16 07:12:30.629045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.306 qpair failed and we were unable to recover it. 00:29:31.306 [2024-10-16 07:12:30.629387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.306 [2024-10-16 07:12:30.629417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.306 qpair failed and we were unable to recover it. 00:29:31.306 [2024-10-16 07:12:30.629782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.306 [2024-10-16 07:12:30.629812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.306 qpair failed and we were unable to recover it. 00:29:31.306 [2024-10-16 07:12:30.630107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.306 [2024-10-16 07:12:30.630136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.306 qpair failed and we were unable to recover it. 
00:29:31.306 [2024-10-16 07:12:30.630509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.306 [2024-10-16 07:12:30.630539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.306 qpair failed and we were unable to recover it. 00:29:31.306 [2024-10-16 07:12:30.630907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.306 [2024-10-16 07:12:30.630938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.306 qpair failed and we were unable to recover it. 00:29:31.306 [2024-10-16 07:12:30.631315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.306 [2024-10-16 07:12:30.631345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.306 qpair failed and we were unable to recover it. 00:29:31.306 [2024-10-16 07:12:30.631720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.306 [2024-10-16 07:12:30.631750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.306 qpair failed and we were unable to recover it. 00:29:31.306 [2024-10-16 07:12:30.632082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.306 [2024-10-16 07:12:30.632119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.306 qpair failed and we were unable to recover it. 00:29:31.306 [2024-10-16 07:12:30.632461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.306 [2024-10-16 07:12:30.632490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.306 qpair failed and we were unable to recover it. 00:29:31.306 [2024-10-16 07:12:30.632841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.306 [2024-10-16 07:12:30.632880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.306 qpair failed and we were unable to recover it. 00:29:31.306 [2024-10-16 07:12:30.633253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.306 [2024-10-16 07:12:30.633281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.306 qpair failed and we were unable to recover it. 00:29:31.306 [2024-10-16 07:12:30.633647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.306 [2024-10-16 07:12:30.633677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.306 qpair failed and we were unable to recover it. 00:29:31.306 [2024-10-16 07:12:30.633927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.306 [2024-10-16 07:12:30.633956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.306 qpair failed and we were unable to recover it. 
00:29:31.306 [2024-10-16 07:12:30.634202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.306 [2024-10-16 07:12:30.634234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.306 qpair failed and we were unable to recover it. 00:29:31.306 [2024-10-16 07:12:30.634608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.306 [2024-10-16 07:12:30.634637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.306 qpair failed and we were unable to recover it. 00:29:31.306 [2024-10-16 07:12:30.635000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.306 [2024-10-16 07:12:30.635032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.306 qpair failed and we were unable to recover it. 00:29:31.306 [2024-10-16 07:12:30.635370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.306 [2024-10-16 07:12:30.635399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.306 qpair failed and we were unable to recover it. 00:29:31.306 [2024-10-16 07:12:30.635742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.306 [2024-10-16 07:12:30.635771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.306 qpair failed and we were unable to recover it. 00:29:31.306 [2024-10-16 07:12:30.636165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.306 [2024-10-16 07:12:30.636195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.306 qpair failed and we were unable to recover it. 00:29:31.306 [2024-10-16 07:12:30.636558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.306 [2024-10-16 07:12:30.636588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.306 qpair failed and we were unable to recover it. 00:29:31.306 [2024-10-16 07:12:30.636955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.306 [2024-10-16 07:12:30.636985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.306 qpair failed and we were unable to recover it. 00:29:31.306 [2024-10-16 07:12:30.637350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.306 [2024-10-16 07:12:30.637379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.307 qpair failed and we were unable to recover it. 00:29:31.307 [2024-10-16 07:12:30.637746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.307 [2024-10-16 07:12:30.637775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.307 qpair failed and we were unable to recover it. 
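Annotation (not part of the captured log): errno 111 on Linux is ECONNREFUSED, meaning nothing is accepting TCP connections at 10.0.0.2:4420 yet, so every connect() issued for the host's qpair is rejected and SPDK keeps retrying. A minimal standalone sketch of that failure mode, assuming only POSIX sockets (this is a hypothetical reproduction, not SPDK's actual posix_sock_create()):

```c
/* Hypothetical minimal reproduction: connect() to a TCP port with no
 * listener fails with ECONNREFUSED (errno 111 on Linux), the errno
 * reported by posix_sock_create() in the log above. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port   = htons(4420),   /* NVMe/TCP port used in the log */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on 10.0.0.2:4420 this prints errno = 111. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}
```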
00:29:31.307 [2024-10-16 07:12:30.638082 .. 07:12:30.641125] last message repeated 9 times (connect() failed, errno = 111 triplet, tqpair=0x7f64d0000b90, addr=10.0.0.2, port=4420) 00:29:31.307 [2024-10-16 07:12:30.641371] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:31.307 [2024-10-16 07:12:30.641416] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:31.307 [2024-10-16 07:12:30.641424] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:31.307 [2024-10-16 07:12:30.641431] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:31.307 [2024-10-16 07:12:30.641438] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:31.307 [2024-10-16 07:12:30.641484 .. 07:12:30.643241] last message repeated 6 times (connect() failed, errno = 111 triplet, tqpair=0x7f64d0000b90, addr=10.0.0.2, port=4420) 00:29:31.307 [2024-10-16 07:12:30.643462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:31.307 [2024-10-16 07:12:30.643618 .. 07:12:30.643647] last message repeated once (same triplet)
00:29:31.307 [2024-10-16 07:12:30.643618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:31.307 [2024-10-16 07:12:30.643800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:29:31.307 [2024-10-16 07:12:30.643801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:31.307 [2024-10-16 07:12:30.643997 .. 07:12:30.647086] last message repeated 9 times (connect() failed, errno = 111 triplet, tqpair=0x7f64d0000b90, addr=10.0.0.2, port=4420)
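Annotation (not part of the captured log): SPDK runs one reactor, a busy-polling event loop pinned to a core, for each core in the application's core mask; the lines above show reactors coming up on cores 4-7 while the NVMe/TCP host keeps retrying its connection. A conceptual sketch of that pattern, with illustrative names only (this is not SPDK's reactor API):

```c
/* Conceptual per-core reactor sketch; names are illustrative, not SPDK's API.
 * A reactor is pinned to one core and busy-polls its registered pollers,
 * e.g. an NVMe/TCP qpair poller that keeps retrying connect(). */
#include <stdbool.h>
#include <stddef.h>

typedef int (*poller_fn)(void *ctx);   /* returns >0 if it did work */

struct reactor {
    unsigned int core;                 /* core this loop is pinned to */
    volatile bool running;
};

static void reactor_run(struct reactor *r, poller_fn *pollers, void **ctxs, size_t n)
{
    /* "Reactor started on core N" in the log marks entry into a loop like this. */
    r->running = true;
    while (r->running) {
        for (size_t i = 0; i < n; i++) {
            pollers[i](ctxs[i]);
        }
    }
}
```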
00:29:31.307-00:29:31.309 [2024-10-16 07:12:30.647439 .. 07:12:30.676505] last message repeated 80 times (connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it." triplet, tqpair=0x7f64d0000b90, addr=10.0.0.2, port=4420)
00:29:31.309 [2024-10-16 07:12:30.676756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.309 [2024-10-16 07:12:30.676789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.309 qpair failed and we were unable to recover it. 00:29:31.309 [2024-10-16 07:12:30.677072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.309 [2024-10-16 07:12:30.677112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.309 qpair failed and we were unable to recover it. 00:29:31.309 [2024-10-16 07:12:30.677354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.309 [2024-10-16 07:12:30.677384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.309 qpair failed and we were unable to recover it. 00:29:31.309 [2024-10-16 07:12:30.677636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.309 [2024-10-16 07:12:30.677667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.309 qpair failed and we were unable to recover it. 00:29:31.309 [2024-10-16 07:12:30.678034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.309 [2024-10-16 07:12:30.678065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.309 qpair failed and we were unable to recover it. 00:29:31.309 [2024-10-16 07:12:30.678443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.309 [2024-10-16 07:12:30.678472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.309 qpair failed and we were unable to recover it. 00:29:31.309 [2024-10-16 07:12:30.678736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.309 [2024-10-16 07:12:30.678764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.309 qpair failed and we were unable to recover it. 00:29:31.309 [2024-10-16 07:12:30.679053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.309 [2024-10-16 07:12:30.679084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.309 qpair failed and we were unable to recover it. 00:29:31.309 [2024-10-16 07:12:30.679454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.309 [2024-10-16 07:12:30.679482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.309 qpair failed and we were unable to recover it. 00:29:31.309 [2024-10-16 07:12:30.679857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.309 [2024-10-16 07:12:30.679888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.309 qpair failed and we were unable to recover it. 
00:29:31.309 [2024-10-16 07:12:30.680160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.309 [2024-10-16 07:12:30.680190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.309 qpair failed and we were unable to recover it. 00:29:31.310 [2024-10-16 07:12:30.680517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.310 [2024-10-16 07:12:30.680546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.310 qpair failed and we were unable to recover it. 00:29:31.310 [2024-10-16 07:12:30.680683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.310 [2024-10-16 07:12:30.680715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.310 qpair failed and we were unable to recover it. 00:29:31.310 [2024-10-16 07:12:30.681097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.310 [2024-10-16 07:12:30.681130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.310 qpair failed and we were unable to recover it. 00:29:31.310 [2024-10-16 07:12:30.681492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.310 [2024-10-16 07:12:30.681521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.310 qpair failed and we were unable to recover it. 00:29:31.310 [2024-10-16 07:12:30.681909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.310 [2024-10-16 07:12:30.681941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.310 qpair failed and we were unable to recover it. 00:29:31.310 [2024-10-16 07:12:30.682326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.310 [2024-10-16 07:12:30.682356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.310 qpair failed and we were unable to recover it. 00:29:31.310 [2024-10-16 07:12:30.682728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.310 [2024-10-16 07:12:30.682758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.310 qpair failed and we were unable to recover it. 00:29:31.310 [2024-10-16 07:12:30.683025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.310 [2024-10-16 07:12:30.683056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.310 qpair failed and we were unable to recover it. 00:29:31.310 [2024-10-16 07:12:30.683304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.310 [2024-10-16 07:12:30.683334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.310 qpair failed and we were unable to recover it. 
00:29:31.310 [2024-10-16 07:12:30.683715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.310 [2024-10-16 07:12:30.683745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.310 qpair failed and we were unable to recover it. 00:29:31.310 [2024-10-16 07:12:30.684145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.310 [2024-10-16 07:12:30.684175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.310 qpair failed and we were unable to recover it. 00:29:31.310 [2024-10-16 07:12:30.684535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.310 [2024-10-16 07:12:30.684562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.310 qpair failed and we were unable to recover it. 00:29:31.310 [2024-10-16 07:12:30.684951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.310 [2024-10-16 07:12:30.684981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.310 qpair failed and we were unable to recover it. 00:29:31.310 [2024-10-16 07:12:30.685344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.310 [2024-10-16 07:12:30.685373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.310 qpair failed and we were unable to recover it. 00:29:31.310 [2024-10-16 07:12:30.685635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.310 [2024-10-16 07:12:30.685665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.310 qpair failed and we were unable to recover it. 00:29:31.310 [2024-10-16 07:12:30.685922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.310 [2024-10-16 07:12:30.685952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.310 qpair failed and we were unable to recover it. 00:29:31.310 [2024-10-16 07:12:30.686229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.310 [2024-10-16 07:12:30.686257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.310 qpair failed and we were unable to recover it. 00:29:31.310 [2024-10-16 07:12:30.686517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.310 [2024-10-16 07:12:30.686547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.310 qpair failed and we were unable to recover it. 00:29:31.310 [2024-10-16 07:12:30.686902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.310 [2024-10-16 07:12:30.686933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.310 qpair failed and we were unable to recover it. 
00:29:31.310 [2024-10-16 07:12:30.687312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.310 [2024-10-16 07:12:30.687340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.310 qpair failed and we were unable to recover it. 00:29:31.310 [2024-10-16 07:12:30.687586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.310 [2024-10-16 07:12:30.687614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.310 qpair failed and we were unable to recover it. 00:29:31.310 [2024-10-16 07:12:30.687966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.310 [2024-10-16 07:12:30.687995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.310 qpair failed and we were unable to recover it. 00:29:31.310 [2024-10-16 07:12:30.688368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.310 [2024-10-16 07:12:30.688396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.310 qpair failed and we were unable to recover it. 00:29:31.310 [2024-10-16 07:12:30.688756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.310 [2024-10-16 07:12:30.688784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.310 qpair failed and we were unable to recover it. 00:29:31.310 [2024-10-16 07:12:30.689150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.310 [2024-10-16 07:12:30.689180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.310 qpair failed and we were unable to recover it. 00:29:31.310 [2024-10-16 07:12:30.689422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.310 [2024-10-16 07:12:30.689450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.310 qpair failed and we were unable to recover it. 00:29:31.310 [2024-10-16 07:12:30.689670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.310 [2024-10-16 07:12:30.689698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.310 qpair failed and we were unable to recover it. 00:29:31.310 [2024-10-16 07:12:30.689861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.310 [2024-10-16 07:12:30.689893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.310 qpair failed and we were unable to recover it. 00:29:31.310 [2024-10-16 07:12:30.690347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.310 [2024-10-16 07:12:30.690376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.310 qpair failed and we were unable to recover it. 
00:29:31.310 [2024-10-16 07:12:30.690734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.310 [2024-10-16 07:12:30.690762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.310 qpair failed and we were unable to recover it. 00:29:31.310 [2024-10-16 07:12:30.691140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.310 [2024-10-16 07:12:30.691183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.310 qpair failed and we were unable to recover it. 00:29:31.310 [2024-10-16 07:12:30.691563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.310 [2024-10-16 07:12:30.691591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.310 qpair failed and we were unable to recover it. 00:29:31.310 [2024-10-16 07:12:30.691832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.310 [2024-10-16 07:12:30.691872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.310 qpair failed and we were unable to recover it. 00:29:31.310 [2024-10-16 07:12:30.692236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.310 [2024-10-16 07:12:30.692264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.310 qpair failed and we were unable to recover it. 00:29:31.310 [2024-10-16 07:12:30.692428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.310 [2024-10-16 07:12:30.692457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.310 qpair failed and we were unable to recover it. 00:29:31.310 [2024-10-16 07:12:30.692826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.310 [2024-10-16 07:12:30.692865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.310 qpair failed and we were unable to recover it. 00:29:31.310 [2024-10-16 07:12:30.693202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.311 [2024-10-16 07:12:30.693232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.311 qpair failed and we were unable to recover it. 00:29:31.311 [2024-10-16 07:12:30.693469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.311 [2024-10-16 07:12:30.693497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.311 qpair failed and we were unable to recover it. 00:29:31.311 [2024-10-16 07:12:30.693901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.311 [2024-10-16 07:12:30.693931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.311 qpair failed and we were unable to recover it. 
00:29:31.311 [2024-10-16 07:12:30.694313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.311 [2024-10-16 07:12:30.694342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.311 qpair failed and we were unable to recover it. 00:29:31.311 [2024-10-16 07:12:30.694732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.311 [2024-10-16 07:12:30.694761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.311 qpair failed and we were unable to recover it. 00:29:31.311 [2024-10-16 07:12:30.694990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.311 [2024-10-16 07:12:30.695020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.311 qpair failed and we were unable to recover it. 00:29:31.311 [2024-10-16 07:12:30.695354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.311 [2024-10-16 07:12:30.695383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.311 qpair failed and we were unable to recover it. 00:29:31.311 [2024-10-16 07:12:30.695753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.311 [2024-10-16 07:12:30.695782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.311 qpair failed and we were unable to recover it. 00:29:31.311 [2024-10-16 07:12:30.696198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.311 [2024-10-16 07:12:30.696229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.311 qpair failed and we were unable to recover it. 00:29:31.311 [2024-10-16 07:12:30.696582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.311 [2024-10-16 07:12:30.696611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.311 qpair failed and we were unable to recover it. 00:29:31.311 [2024-10-16 07:12:30.696828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.311 [2024-10-16 07:12:30.696865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.311 qpair failed and we were unable to recover it. 00:29:31.311 [2024-10-16 07:12:30.697255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.311 [2024-10-16 07:12:30.697284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.311 qpair failed and we were unable to recover it. 00:29:31.311 [2024-10-16 07:12:30.697651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.311 [2024-10-16 07:12:30.697680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.311 qpair failed and we were unable to recover it. 
00:29:31.311 [2024-10-16 07:12:30.698022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.311 [2024-10-16 07:12:30.698052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.311 qpair failed and we were unable to recover it. 00:29:31.311 [2024-10-16 07:12:30.698400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.311 [2024-10-16 07:12:30.698429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.311 qpair failed and we were unable to recover it. 00:29:31.311 [2024-10-16 07:12:30.698786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.311 [2024-10-16 07:12:30.698816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.311 qpair failed and we were unable to recover it. 00:29:31.311 [2024-10-16 07:12:30.699177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.311 [2024-10-16 07:12:30.699207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.311 qpair failed and we were unable to recover it. 00:29:31.311 [2024-10-16 07:12:30.699477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.311 [2024-10-16 07:12:30.699505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.311 qpair failed and we were unable to recover it. 00:29:31.311 [2024-10-16 07:12:30.699924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.311 [2024-10-16 07:12:30.699954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.311 qpair failed and we were unable to recover it. 00:29:31.311 [2024-10-16 07:12:30.700340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.311 [2024-10-16 07:12:30.700370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.311 qpair failed and we were unable to recover it. 00:29:31.311 [2024-10-16 07:12:30.700708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.311 [2024-10-16 07:12:30.700737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.311 qpair failed and we were unable to recover it. 00:29:31.311 [2024-10-16 07:12:30.701097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.311 [2024-10-16 07:12:30.701128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.311 qpair failed and we were unable to recover it. 00:29:31.311 [2024-10-16 07:12:30.701374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.311 [2024-10-16 07:12:30.701405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.311 qpair failed and we were unable to recover it. 
00:29:31.311 [2024-10-16 07:12:30.701777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.311 [2024-10-16 07:12:30.701807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.311 qpair failed and we were unable to recover it. 00:29:31.311 [2024-10-16 07:12:30.702111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.311 [2024-10-16 07:12:30.702140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.311 qpair failed and we were unable to recover it. 00:29:31.311 [2024-10-16 07:12:30.702484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.311 [2024-10-16 07:12:30.702512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.311 qpair failed and we were unable to recover it. 00:29:31.311 [2024-10-16 07:12:30.702876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.311 [2024-10-16 07:12:30.702906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.311 qpair failed and we were unable to recover it. 00:29:31.311 [2024-10-16 07:12:30.703148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.311 [2024-10-16 07:12:30.703177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.311 qpair failed and we were unable to recover it. 00:29:31.311 [2024-10-16 07:12:30.703515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.311 [2024-10-16 07:12:30.703543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.311 qpair failed and we were unable to recover it. 00:29:31.311 [2024-10-16 07:12:30.703902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.311 [2024-10-16 07:12:30.703931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.311 qpair failed and we were unable to recover it. 00:29:31.311 [2024-10-16 07:12:30.704182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.311 [2024-10-16 07:12:30.704210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.311 qpair failed and we were unable to recover it. 00:29:31.311 [2024-10-16 07:12:30.704565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.311 [2024-10-16 07:12:30.704593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.311 qpair failed and we were unable to recover it. 00:29:31.311 [2024-10-16 07:12:30.704969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.311 [2024-10-16 07:12:30.705000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.311 qpair failed and we were unable to recover it. 
00:29:31.311 [2024-10-16 07:12:30.705382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.311 [2024-10-16 07:12:30.705411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.311 qpair failed and we were unable to recover it. 00:29:31.311 [2024-10-16 07:12:30.705784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.311 [2024-10-16 07:12:30.705830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.311 qpair failed and we were unable to recover it. 00:29:31.311 [2024-10-16 07:12:30.706123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.311 [2024-10-16 07:12:30.706153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.311 qpair failed and we were unable to recover it. 00:29:31.311 [2024-10-16 07:12:30.706400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.311 [2024-10-16 07:12:30.706427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.311 qpair failed and we were unable to recover it. 00:29:31.311 [2024-10-16 07:12:30.706651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.311 [2024-10-16 07:12:30.706679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.311 qpair failed and we were unable to recover it. 00:29:31.311 [2024-10-16 07:12:30.707054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.311 [2024-10-16 07:12:30.707085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.311 qpair failed and we were unable to recover it. 00:29:31.311 [2024-10-16 07:12:30.707453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.311 [2024-10-16 07:12:30.707482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.311 qpair failed and we were unable to recover it. 00:29:31.311 [2024-10-16 07:12:30.707841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.312 [2024-10-16 07:12:30.707884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.312 qpair failed and we were unable to recover it. 00:29:31.312 [2024-10-16 07:12:30.708118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.312 [2024-10-16 07:12:30.708149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.312 qpair failed and we were unable to recover it. 00:29:31.312 [2024-10-16 07:12:30.708521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.312 [2024-10-16 07:12:30.708549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.312 qpair failed and we were unable to recover it. 
00:29:31.312 [2024-10-16 07:12:30.708976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.312 [2024-10-16 07:12:30.709007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.312 qpair failed and we were unable to recover it. 00:29:31.312 [2024-10-16 07:12:30.709378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.312 [2024-10-16 07:12:30.709407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.312 qpair failed and we were unable to recover it. 00:29:31.312 [2024-10-16 07:12:30.709768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.312 [2024-10-16 07:12:30.709800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.312 qpair failed and we were unable to recover it. 00:29:31.312 [2024-10-16 07:12:30.710198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.312 [2024-10-16 07:12:30.710228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.312 qpair failed and we were unable to recover it. 00:29:31.312 [2024-10-16 07:12:30.710472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.312 [2024-10-16 07:12:30.710502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.312 qpair failed and we were unable to recover it. 00:29:31.312 [2024-10-16 07:12:30.710875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.312 [2024-10-16 07:12:30.710906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.312 qpair failed and we were unable to recover it. 00:29:31.312 [2024-10-16 07:12:30.711330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.312 [2024-10-16 07:12:30.711361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.312 qpair failed and we were unable to recover it. 00:29:31.312 [2024-10-16 07:12:30.711723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.312 [2024-10-16 07:12:30.711752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.312 qpair failed and we were unable to recover it. 00:29:31.312 [2024-10-16 07:12:30.712124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.312 [2024-10-16 07:12:30.712155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.312 qpair failed and we were unable to recover it. 00:29:31.312 [2024-10-16 07:12:30.712523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.312 [2024-10-16 07:12:30.712552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.312 qpair failed and we were unable to recover it. 
00:29:31.312 [2024-10-16 07:12:30.712925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.312 [2024-10-16 07:12:30.712954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.312 qpair failed and we were unable to recover it. 00:29:31.312 [2024-10-16 07:12:30.713329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.312 [2024-10-16 07:12:30.713359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.312 qpair failed and we were unable to recover it. 00:29:31.312 [2024-10-16 07:12:30.713761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.312 [2024-10-16 07:12:30.713791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.312 qpair failed and we were unable to recover it. 00:29:31.312 [2024-10-16 07:12:30.714165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.312 [2024-10-16 07:12:30.714195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.312 qpair failed and we were unable to recover it. 00:29:31.312 [2024-10-16 07:12:30.714419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.312 [2024-10-16 07:12:30.714448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.312 qpair failed and we were unable to recover it. 00:29:31.312 [2024-10-16 07:12:30.714666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.312 [2024-10-16 07:12:30.714696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.312 qpair failed and we were unable to recover it. 00:29:31.312 [2024-10-16 07:12:30.714961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.312 [2024-10-16 07:12:30.714991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.312 qpair failed and we were unable to recover it. 00:29:31.312 [2024-10-16 07:12:30.715420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.312 [2024-10-16 07:12:30.715449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.312 qpair failed and we were unable to recover it. 00:29:31.312 [2024-10-16 07:12:30.715552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.312 [2024-10-16 07:12:30.715588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.312 qpair failed and we were unable to recover it. 00:29:31.312 [2024-10-16 07:12:30.715842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.312 [2024-10-16 07:12:30.715882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.312 qpair failed and we were unable to recover it. 
00:29:31.312 [2024-10-16 07:12:30.716239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.312 [2024-10-16 07:12:30.716268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.312 qpair failed and we were unable to recover it. 00:29:31.312 [2024-10-16 07:12:30.716632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.312 [2024-10-16 07:12:30.716660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.312 qpair failed and we were unable to recover it. 00:29:31.312 [2024-10-16 07:12:30.717016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.312 [2024-10-16 07:12:30.717045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.312 qpair failed and we were unable to recover it. 00:29:31.312 [2024-10-16 07:12:30.717291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.312 [2024-10-16 07:12:30.717319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.312 qpair failed and we were unable to recover it. 00:29:31.312 [2024-10-16 07:12:30.717644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.312 [2024-10-16 07:12:30.717673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.312 qpair failed and we were unable to recover it. 00:29:31.312 [2024-10-16 07:12:30.718023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.312 [2024-10-16 07:12:30.718053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.312 qpair failed and we were unable to recover it. 00:29:31.312 [2024-10-16 07:12:30.718405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.312 [2024-10-16 07:12:30.718432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.312 qpair failed and we were unable to recover it. 00:29:31.312 [2024-10-16 07:12:30.718641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.312 [2024-10-16 07:12:30.718669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.312 qpair failed and we were unable to recover it. 00:29:31.312 [2024-10-16 07:12:30.719082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.312 [2024-10-16 07:12:30.719111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.312 qpair failed and we were unable to recover it. 00:29:31.312 [2024-10-16 07:12:30.719349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.312 [2024-10-16 07:12:30.719378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.312 qpair failed and we were unable to recover it. 
00:29:31.312 [2024-10-16 07:12:30.719753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.312 [2024-10-16 07:12:30.719782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.312 qpair failed and we were unable to recover it. 00:29:31.312 [2024-10-16 07:12:30.720032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.312 [2024-10-16 07:12:30.720062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.312 qpair failed and we were unable to recover it. 00:29:31.312 [2024-10-16 07:12:30.720442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.312 [2024-10-16 07:12:30.720471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.312 qpair failed and we were unable to recover it. 00:29:31.312 [2024-10-16 07:12:30.720724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.312 [2024-10-16 07:12:30.720753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.312 qpair failed and we were unable to recover it. 00:29:31.312 [2024-10-16 07:12:30.721155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.312 [2024-10-16 07:12:30.721185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.312 qpair failed and we were unable to recover it. 00:29:31.312 [2024-10-16 07:12:30.721522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.312 [2024-10-16 07:12:30.721551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.312 qpair failed and we were unable to recover it. 00:29:31.312 [2024-10-16 07:12:30.721759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.312 [2024-10-16 07:12:30.721787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.312 qpair failed and we were unable to recover it. 00:29:31.312 [2024-10-16 07:12:30.722156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.313 [2024-10-16 07:12:30.722188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.313 qpair failed and we were unable to recover it. 00:29:31.313 [2024-10-16 07:12:30.722560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.313 [2024-10-16 07:12:30.722589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.313 qpair failed and we were unable to recover it. 00:29:31.313 [2024-10-16 07:12:30.723003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.313 [2024-10-16 07:12:30.723033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.313 qpair failed and we were unable to recover it. 
00:29:31.313 [2024-10-16 07:12:30.723383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.313 [2024-10-16 07:12:30.723413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.313 qpair failed and we were unable to recover it.
00:29:31.313 [2024-10-16 07:12:30.723672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.313 [2024-10-16 07:12:30.723701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.313 qpair failed and we were unable to recover it.
00:29:31.313 [2024-10-16 07:12:30.723957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.313 [2024-10-16 07:12:30.723986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.313 qpair failed and we were unable to recover it.
00:29:31.313 [2024-10-16 07:12:30.724426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.313 [2024-10-16 07:12:30.724456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.313 qpair failed and we were unable to recover it.
00:29:31.313 [2024-10-16 07:12:30.724692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.313 [2024-10-16 07:12:30.724720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.313 qpair failed and we were unable to recover it.
00:29:31.313 [2024-10-16 07:12:30.725100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.313 [2024-10-16 07:12:30.725130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.313 qpair failed and we were unable to recover it.
00:29:31.313 [2024-10-16 07:12:30.725490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.313 [2024-10-16 07:12:30.725519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.313 qpair failed and we were unable to recover it.
00:29:31.313 [2024-10-16 07:12:30.725735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.313 [2024-10-16 07:12:30.725764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.313 qpair failed and we were unable to recover it.
00:29:31.313 [2024-10-16 07:12:30.726148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.313 [2024-10-16 07:12:30.726177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.313 qpair failed and we were unable to recover it.
00:29:31.313 [2024-10-16 07:12:30.726543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.313 [2024-10-16 07:12:30.726571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.313 qpair failed and we were unable to recover it.
00:29:31.313 [2024-10-16 07:12:30.726859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.313 [2024-10-16 07:12:30.726888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.313 qpair failed and we were unable to recover it.
00:29:31.313 [2024-10-16 07:12:30.727249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.313 [2024-10-16 07:12:30.727278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.313 qpair failed and we were unable to recover it.
00:29:31.313 [2024-10-16 07:12:30.727648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.313 [2024-10-16 07:12:30.727678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.313 qpair failed and we were unable to recover it.
00:29:31.313 [2024-10-16 07:12:30.728058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.313 [2024-10-16 07:12:30.728088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.313 qpair failed and we were unable to recover it.
00:29:31.313 [2024-10-16 07:12:30.728318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.313 [2024-10-16 07:12:30.728346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.313 qpair failed and we were unable to recover it.
00:29:31.313 [2024-10-16 07:12:30.728699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.313 [2024-10-16 07:12:30.728727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.313 qpair failed and we were unable to recover it.
00:29:31.313 [2024-10-16 07:12:30.728989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.313 [2024-10-16 07:12:30.729019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.313 qpair failed and we were unable to recover it.
00:29:31.313 [2024-10-16 07:12:30.729229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.313 [2024-10-16 07:12:30.729260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.313 qpair failed and we were unable to recover it.
00:29:31.313 [2024-10-16 07:12:30.729637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.313 [2024-10-16 07:12:30.729676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.313 qpair failed and we were unable to recover it.
00:29:31.313 [2024-10-16 07:12:30.730036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.313 [2024-10-16 07:12:30.730067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.313 qpair failed and we were unable to recover it.
00:29:31.313 [2024-10-16 07:12:30.730430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.313 [2024-10-16 07:12:30.730460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.313 qpair failed and we were unable to recover it.
00:29:31.313 [2024-10-16 07:12:30.730687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.313 [2024-10-16 07:12:30.730718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.313 qpair failed and we were unable to recover it.
00:29:31.313 [2024-10-16 07:12:30.731100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.313 [2024-10-16 07:12:30.731129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.313 qpair failed and we were unable to recover it.
00:29:31.313 [2024-10-16 07:12:30.731471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.313 [2024-10-16 07:12:30.731501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.313 qpair failed and we were unable to recover it.
00:29:31.313 [2024-10-16 07:12:30.731864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.313 [2024-10-16 07:12:30.731893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.313 qpair failed and we were unable to recover it.
00:29:31.313 [2024-10-16 07:12:30.732137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.313 [2024-10-16 07:12:30.732165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.313 qpair failed and we were unable to recover it.
00:29:31.313 [2024-10-16 07:12:30.732541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.313 [2024-10-16 07:12:30.732572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.313 qpair failed and we were unable to recover it.
00:29:31.313 [2024-10-16 07:12:30.732854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.313 [2024-10-16 07:12:30.732883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.313 qpair failed and we were unable to recover it.
00:29:31.313 [2024-10-16 07:12:30.733228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.313 [2024-10-16 07:12:30.733257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.313 qpair failed and we were unable to recover it.
00:29:31.313 [2024-10-16 07:12:30.733524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.313 [2024-10-16 07:12:30.733552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.313 qpair failed and we were unable to recover it.
00:29:31.313 [2024-10-16 07:12:30.733907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.313 [2024-10-16 07:12:30.733937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.313 qpair failed and we were unable to recover it.
00:29:31.313 [2024-10-16 07:12:30.734305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.313 [2024-10-16 07:12:30.734335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.313 qpair failed and we were unable to recover it.
00:29:31.313 [2024-10-16 07:12:30.734460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.313 [2024-10-16 07:12:30.734490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.313 qpair failed and we were unable to recover it.
00:29:31.313 [2024-10-16 07:12:30.734833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.313 [2024-10-16 07:12:30.734875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.313 qpair failed and we were unable to recover it.
00:29:31.313 [2024-10-16 07:12:30.735098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.313 [2024-10-16 07:12:30.735126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.313 qpair failed and we were unable to recover it.
00:29:31.313 [2024-10-16 07:12:30.735372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.313 [2024-10-16 07:12:30.735401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.313 qpair failed and we were unable to recover it.
00:29:31.313 [2024-10-16 07:12:30.735768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.313 [2024-10-16 07:12:30.735795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.313 qpair failed and we were unable to recover it.
00:29:31.314 [2024-10-16 07:12:30.736186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.314 [2024-10-16 07:12:30.736216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.314 qpair failed and we were unable to recover it.
00:29:31.314 [2024-10-16 07:12:30.736460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.314 [2024-10-16 07:12:30.736488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.314 qpair failed and we were unable to recover it.
00:29:31.314 [2024-10-16 07:12:30.736829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.314 [2024-10-16 07:12:30.736869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.314 qpair failed and we were unable to recover it.
00:29:31.314 [2024-10-16 07:12:30.737253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.314 [2024-10-16 07:12:30.737283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.314 qpair failed and we were unable to recover it.
00:29:31.314 [2024-10-16 07:12:30.737655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.314 [2024-10-16 07:12:30.737683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.314 qpair failed and we were unable to recover it.
00:29:31.314 [2024-10-16 07:12:30.737961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.314 [2024-10-16 07:12:30.737991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.314 qpair failed and we were unable to recover it.
00:29:31.314 [2024-10-16 07:12:30.738348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.314 [2024-10-16 07:12:30.738378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.314 qpair failed and we were unable to recover it.
00:29:31.314 [2024-10-16 07:12:30.738733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.314 [2024-10-16 07:12:30.738762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.314 qpair failed and we were unable to recover it.
00:29:31.314 [2024-10-16 07:12:30.739059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.314 [2024-10-16 07:12:30.739089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.314 qpair failed and we were unable to recover it.
00:29:31.314 [2024-10-16 07:12:30.739429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.314 [2024-10-16 07:12:30.739459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.314 qpair failed and we were unable to recover it.
00:29:31.314 [2024-10-16 07:12:30.739826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.314 [2024-10-16 07:12:30.739866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.314 qpair failed and we were unable to recover it.
00:29:31.314 [2024-10-16 07:12:30.740082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.314 [2024-10-16 07:12:30.740111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.314 qpair failed and we were unable to recover it.
00:29:31.314 [2024-10-16 07:12:30.740479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.314 [2024-10-16 07:12:30.740507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.314 qpair failed and we were unable to recover it.
00:29:31.314 [2024-10-16 07:12:30.740877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.314 [2024-10-16 07:12:30.740906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.314 qpair failed and we were unable to recover it.
00:29:31.314 [2024-10-16 07:12:30.741339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.314 [2024-10-16 07:12:30.741368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.314 qpair failed and we were unable to recover it.
00:29:31.314 [2024-10-16 07:12:30.741602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.314 [2024-10-16 07:12:30.741633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.314 qpair failed and we were unable to recover it.
00:29:31.314 [2024-10-16 07:12:30.741993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.314 [2024-10-16 07:12:30.742023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.314 qpair failed and we were unable to recover it.
00:29:31.314 [2024-10-16 07:12:30.742287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.314 [2024-10-16 07:12:30.742316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.314 qpair failed and we were unable to recover it.
00:29:31.314 [2024-10-16 07:12:30.742561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.314 [2024-10-16 07:12:30.742594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.314 qpair failed and we were unable to recover it.
00:29:31.314 [2024-10-16 07:12:30.742967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.314 [2024-10-16 07:12:30.742997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.314 qpair failed and we were unable to recover it.
00:29:31.314 [2024-10-16 07:12:30.743356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.314 [2024-10-16 07:12:30.743386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.314 qpair failed and we were unable to recover it.
00:29:31.314 [2024-10-16 07:12:30.743494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.314 [2024-10-16 07:12:30.743530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.314 qpair failed and we were unable to recover it.
00:29:31.314 [2024-10-16 07:12:30.743876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.314 [2024-10-16 07:12:30.743906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.314 qpair failed and we were unable to recover it.
00:29:31.314 [2024-10-16 07:12:30.744161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.314 [2024-10-16 07:12:30.744191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.314 qpair failed and we were unable to recover it.
00:29:31.314 [2024-10-16 07:12:30.744546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.314 [2024-10-16 07:12:30.744575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.314 qpair failed and we were unable to recover it.
00:29:31.314 [2024-10-16 07:12:30.744900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.314 [2024-10-16 07:12:30.744929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.314 qpair failed and we were unable to recover it.
00:29:31.314 [2024-10-16 07:12:30.745175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.314 [2024-10-16 07:12:30.745204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.314 qpair failed and we were unable to recover it.
00:29:31.314 [2024-10-16 07:12:30.745575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.314 [2024-10-16 07:12:30.745603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.314 qpair failed and we were unable to recover it.
00:29:31.314 [2024-10-16 07:12:30.745827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.314 [2024-10-16 07:12:30.745905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.314 qpair failed and we were unable to recover it.
00:29:31.314 [2024-10-16 07:12:30.746314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.314 [2024-10-16 07:12:30.746344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.314 qpair failed and we were unable to recover it.
00:29:31.314 [2024-10-16 07:12:30.746700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.314 [2024-10-16 07:12:30.746728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.314 qpair failed and we were unable to recover it.
00:29:31.314 [2024-10-16 07:12:30.747100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.314 [2024-10-16 07:12:30.747130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.314 qpair failed and we were unable to recover it.
00:29:31.314 [2024-10-16 07:12:30.747494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.314 [2024-10-16 07:12:30.747522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.314 qpair failed and we were unable to recover it.
00:29:31.314 [2024-10-16 07:12:30.747767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.314 [2024-10-16 07:12:30.747796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.314 qpair failed and we were unable to recover it.
00:29:31.314 [2024-10-16 07:12:30.748223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.314 [2024-10-16 07:12:30.748254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.314 qpair failed and we were unable to recover it.
00:29:31.314 [2024-10-16 07:12:30.748600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.314 [2024-10-16 07:12:30.748630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.315 qpair failed and we were unable to recover it.
00:29:31.315 [2024-10-16 07:12:30.748852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.315 [2024-10-16 07:12:30.748885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.315 qpair failed and we were unable to recover it.
00:29:31.315 [2024-10-16 07:12:30.749296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.315 [2024-10-16 07:12:30.749324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.315 qpair failed and we were unable to recover it.
00:29:31.315 [2024-10-16 07:12:30.749666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.315 [2024-10-16 07:12:30.749693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.315 qpair failed and we were unable to recover it.
00:29:31.315 [2024-10-16 07:12:30.750042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.315 [2024-10-16 07:12:30.750071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.315 qpair failed and we were unable to recover it.
00:29:31.315 [2024-10-16 07:12:30.750440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.315 [2024-10-16 07:12:30.750468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.315 qpair failed and we were unable to recover it.
00:29:31.315 [2024-10-16 07:12:30.750839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.315 [2024-10-16 07:12:30.750878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.315 qpair failed and we were unable to recover it.
00:29:31.315 [2024-10-16 07:12:30.751246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.315 [2024-10-16 07:12:30.751274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.315 qpair failed and we were unable to recover it.
00:29:31.315 [2024-10-16 07:12:30.751637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.315 [2024-10-16 07:12:30.751665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.315 qpair failed and we were unable to recover it.
00:29:31.315 [2024-10-16 07:12:30.752029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.315 [2024-10-16 07:12:30.752060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.315 qpair failed and we were unable to recover it.
00:29:31.315 [2024-10-16 07:12:30.752285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.315 [2024-10-16 07:12:30.752313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.315 qpair failed and we were unable to recover it.
00:29:31.315 [2024-10-16 07:12:30.752763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.315 [2024-10-16 07:12:30.752791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.315 qpair failed and we were unable to recover it.
00:29:31.315 [2024-10-16 07:12:30.753146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.315 [2024-10-16 07:12:30.753176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.315 qpair failed and we were unable to recover it.
00:29:31.315 [2024-10-16 07:12:30.753543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.315 [2024-10-16 07:12:30.753571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.315 qpair failed and we were unable to recover it.
00:29:31.315 [2024-10-16 07:12:30.753803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.315 [2024-10-16 07:12:30.753831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.315 qpair failed and we were unable to recover it.
00:29:31.315 [2024-10-16 07:12:30.753949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.315 [2024-10-16 07:12:30.753978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.315 qpair failed and we were unable to recover it.
00:29:31.315 [2024-10-16 07:12:30.754335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.315 [2024-10-16 07:12:30.754365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.315 qpair failed and we were unable to recover it.
00:29:31.315 [2024-10-16 07:12:30.754626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.315 [2024-10-16 07:12:30.754654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.315 qpair failed and we were unable to recover it.
00:29:31.315 [2024-10-16 07:12:30.754997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.315 [2024-10-16 07:12:30.755026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.315 qpair failed and we were unable to recover it.
00:29:31.315 [2024-10-16 07:12:30.755337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.315 [2024-10-16 07:12:30.755367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.315 qpair failed and we were unable to recover it.
00:29:31.315 [2024-10-16 07:12:30.755739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.315 [2024-10-16 07:12:30.755767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.315 qpair failed and we were unable to recover it.
00:29:31.315 [2024-10-16 07:12:30.756136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.315 [2024-10-16 07:12:30.756166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.315 qpair failed and we were unable to recover it.
00:29:31.315 [2024-10-16 07:12:30.756533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.315 [2024-10-16 07:12:30.756563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.315 qpair failed and we were unable to recover it.
00:29:31.315 [2024-10-16 07:12:30.756924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.315 [2024-10-16 07:12:30.756953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.315 qpair failed and we were unable to recover it.
00:29:31.315 [2024-10-16 07:12:30.757321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.315 [2024-10-16 07:12:30.757351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.315 qpair failed and we were unable to recover it.
00:29:31.315 [2024-10-16 07:12:30.757583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.315 [2024-10-16 07:12:30.757611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.315 qpair failed and we were unable to recover it.
00:29:31.315 [2024-10-16 07:12:30.758047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.315 [2024-10-16 07:12:30.758083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.315 qpair failed and we were unable to recover it.
00:29:31.315 [2024-10-16 07:12:30.758458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.315 [2024-10-16 07:12:30.758487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.315 qpair failed and we were unable to recover it.
00:29:31.315 [2024-10-16 07:12:30.758872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.315 [2024-10-16 07:12:30.758901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.315 qpair failed and we were unable to recover it.
00:29:31.315 [2024-10-16 07:12:30.759126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.315 [2024-10-16 07:12:30.759154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.315 qpair failed and we were unable to recover it.
00:29:31.315 [2024-10-16 07:12:30.759509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.315 [2024-10-16 07:12:30.759537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.315 qpair failed and we were unable to recover it.
00:29:31.315 [2024-10-16 07:12:30.759748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.315 [2024-10-16 07:12:30.759776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.315 qpair failed and we were unable to recover it.
00:29:31.315 [2024-10-16 07:12:30.760161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.315 [2024-10-16 07:12:30.760191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.315 qpair failed and we were unable to recover it.
00:29:31.315 [2024-10-16 07:12:30.760580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.315 [2024-10-16 07:12:30.760608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.315 qpair failed and we were unable to recover it.
00:29:31.315 [2024-10-16 07:12:30.760859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.315 [2024-10-16 07:12:30.760891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.315 qpair failed and we were unable to recover it.
00:29:31.315 [2024-10-16 07:12:30.761123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.315 [2024-10-16 07:12:30.761152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.315 qpair failed and we were unable to recover it.
00:29:31.315 [2024-10-16 07:12:30.761524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.315 [2024-10-16 07:12:30.761553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.315 qpair failed and we were unable to recover it.
00:29:31.315 [2024-10-16 07:12:30.761927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.315 [2024-10-16 07:12:30.761957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.315 qpair failed and we were unable to recover it.
00:29:31.315 [2024-10-16 07:12:30.762106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.315 [2024-10-16 07:12:30.762134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.315 qpair failed and we were unable to recover it.
00:29:31.315 [2024-10-16 07:12:30.762551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.315 [2024-10-16 07:12:30.762579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.315 qpair failed and we were unable to recover it.
00:29:31.315 [2024-10-16 07:12:30.762958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.315 [2024-10-16 07:12:30.762989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.315 qpair failed and we were unable to recover it.
00:29:31.315 [2024-10-16 07:12:30.763208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.316 [2024-10-16 07:12:30.763237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.316 qpair failed and we were unable to recover it.
00:29:31.316 [2024-10-16 07:12:30.763572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.316 [2024-10-16 07:12:30.763601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.316 qpair failed and we were unable to recover it.
00:29:31.316 [2024-10-16 07:12:30.764010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.316 [2024-10-16 07:12:30.764040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.316 qpair failed and we were unable to recover it.
00:29:31.316 [2024-10-16 07:12:30.764394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.316 [2024-10-16 07:12:30.764424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.316 qpair failed and we were unable to recover it.
00:29:31.316 [2024-10-16 07:12:30.764795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.316 [2024-10-16 07:12:30.764823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.316 qpair failed and we were unable to recover it.
00:29:31.316 [2024-10-16 07:12:30.765048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.316 [2024-10-16 07:12:30.765078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.316 qpair failed and we were unable to recover it.
00:29:31.316 [2024-10-16 07:12:30.765465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.316 [2024-10-16 07:12:30.765495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.316 qpair failed and we were unable to recover it.
00:29:31.316 [2024-10-16 07:12:30.765753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.316 [2024-10-16 07:12:30.765781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.316 qpair failed and we were unable to recover it.
00:29:31.316 [2024-10-16 07:12:30.766141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.316 [2024-10-16 07:12:30.766171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.316 qpair failed and we were unable to recover it.
00:29:31.316 [2024-10-16 07:12:30.766539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.316 [2024-10-16 07:12:30.766568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.316 qpair failed and we were unable to recover it.
00:29:31.316 [2024-10-16 07:12:30.766798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.316 [2024-10-16 07:12:30.766826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.316 qpair failed and we were unable to recover it.
00:29:31.316 [2024-10-16 07:12:30.767234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.316 [2024-10-16 07:12:30.767265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.316 qpair failed and we were unable to recover it.
00:29:31.316 [2024-10-16 07:12:30.767625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.316 [2024-10-16 07:12:30.767654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.316 qpair failed and we were unable to recover it.
00:29:31.316 [2024-10-16 07:12:30.767914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.316 [2024-10-16 07:12:30.767944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.316 qpair failed and we were unable to recover it.
00:29:31.316 [2024-10-16 07:12:30.768305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.316 [2024-10-16 07:12:30.768333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.316 qpair failed and we were unable to recover it.
00:29:31.316 [2024-10-16 07:12:30.768691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.316 [2024-10-16 07:12:30.768720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.316 qpair failed and we were unable to recover it.
00:29:31.316 [2024-10-16 07:12:30.769071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.316 [2024-10-16 07:12:30.769101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.316 qpair failed and we were unable to recover it.
00:29:31.316 [2024-10-16 07:12:30.769249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.316 [2024-10-16 07:12:30.769280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.316 qpair failed and we were unable to recover it.
00:29:31.316 [2024-10-16 07:12:30.769542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.316 [2024-10-16 07:12:30.769570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.316 qpair failed and we were unable to recover it.
00:29:31.316 [2024-10-16 07:12:30.769810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.316 [2024-10-16 07:12:30.769839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.316 qpair failed and we were unable to recover it.
00:29:31.316 [2024-10-16 07:12:30.770193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.316 [2024-10-16 07:12:30.770222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.316 qpair failed and we were unable to recover it.
00:29:31.316 [2024-10-16 07:12:30.770603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.316 [2024-10-16 07:12:30.770631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.316 qpair failed and we were unable to recover it.
00:29:31.316 [2024-10-16 07:12:30.770825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.316 [2024-10-16 07:12:30.770864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.316 qpair failed and we were unable to recover it.
00:29:31.316 [2024-10-16 07:12:30.771012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.316 [2024-10-16 07:12:30.771041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.316 qpair failed and we were unable to recover it.
00:29:31.316 [2024-10-16 07:12:30.771454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.316 [2024-10-16 07:12:30.771482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.316 qpair failed and we were unable to recover it.
00:29:31.316 [2024-10-16 07:12:30.771709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.316 [2024-10-16 07:12:30.771745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.316 qpair failed and we were unable to recover it.
00:29:31.316 [2024-10-16 07:12:30.772001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.316 [2024-10-16 07:12:30.772031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.316 qpair failed and we were unable to recover it.
00:29:31.316 [2024-10-16 07:12:30.772387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.316 [2024-10-16 07:12:30.772417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.316 qpair failed and we were unable to recover it.
00:29:31.316 [2024-10-16 07:12:30.772796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.316 [2024-10-16 07:12:30.772824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.316 qpair failed and we were unable to recover it.
00:29:31.316 [2024-10-16 07:12:30.773177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.316 [2024-10-16 07:12:30.773206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.316 qpair failed and we were unable to recover it.
00:29:31.316 [2024-10-16 07:12:30.773577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.316 [2024-10-16 07:12:30.773607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.316 qpair failed and we were unable to recover it.
00:29:31.316 [2024-10-16 07:12:30.773965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.316 [2024-10-16 07:12:30.773994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.316 qpair failed and we were unable to recover it.
00:29:31.316 [2024-10-16 07:12:30.774366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.316 [2024-10-16 07:12:30.774395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.316 qpair failed and we were unable to recover it.
00:29:31.316 [2024-10-16 07:12:30.774756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.316 [2024-10-16 07:12:30.774784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.316 qpair failed and we were unable to recover it.
00:29:31.316 [2024-10-16 07:12:30.775026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.316 [2024-10-16 07:12:30.775057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.316 qpair failed and we were unable to recover it.
00:29:31.316 [2024-10-16 07:12:30.775427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.316 [2024-10-16 07:12:30.775455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.316 qpair failed and we were unable to recover it.
00:29:31.316 [2024-10-16 07:12:30.775807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.316 [2024-10-16 07:12:30.775837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.316 qpair failed and we were unable to recover it.
00:29:31.316 [2024-10-16 07:12:30.776238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.316 [2024-10-16 07:12:30.776267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.316 qpair failed and we were unable to recover it.
00:29:31.316 [2024-10-16 07:12:30.776639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.316 [2024-10-16 07:12:30.776669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.316 qpair failed and we were unable to recover it.
00:29:31.316 [2024-10-16 07:12:30.777020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.316 [2024-10-16 07:12:30.777051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.316 qpair failed and we were unable to recover it.
00:29:31.316 [2024-10-16 07:12:30.777282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.317 [2024-10-16 07:12:30.777310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.317 qpair failed and we were unable to recover it.
00:29:31.317 [2024-10-16 07:12:30.777674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.317 [2024-10-16 07:12:30.777702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.317 qpair failed and we were unable to recover it.
00:29:31.317 [2024-10-16 07:12:30.777891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.317 [2024-10-16 07:12:30.777920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.317 qpair failed and we were unable to recover it.
00:29:31.317 [2024-10-16 07:12:30.778258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.317 [2024-10-16 07:12:30.778286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.317 qpair failed and we were unable to recover it.
00:29:31.317 [2024-10-16 07:12:30.778662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.317 [2024-10-16 07:12:30.778690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.317 qpair failed and we were unable to recover it.
00:29:31.317 [2024-10-16 07:12:30.779074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.317 [2024-10-16 07:12:30.779105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.317 qpair failed and we were unable to recover it.
00:29:31.317 [2024-10-16 07:12:30.779363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.317 [2024-10-16 07:12:30.779395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.317 qpair failed and we were unable to recover it.
00:29:31.317 [2024-10-16 07:12:30.779745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.317 [2024-10-16 07:12:30.779775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.317 qpair failed and we were unable to recover it.
00:29:31.317 [2024-10-16 07:12:30.780160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.317 [2024-10-16 07:12:30.780189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.317 qpair failed and we were unable to recover it.
00:29:31.317 [2024-10-16 07:12:30.780466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.317 [2024-10-16 07:12:30.780495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.317 qpair failed and we were unable to recover it.
00:29:31.317 [2024-10-16 07:12:30.780865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.317 [2024-10-16 07:12:30.780895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.317 qpair failed and we were unable to recover it.
00:29:31.317 [2024-10-16 07:12:30.781256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.317 [2024-10-16 07:12:30.781285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.317 qpair failed and we were unable to recover it.
00:29:31.317 [2024-10-16 07:12:30.781547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.317 [2024-10-16 07:12:30.781576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.317 qpair failed and we were unable to recover it.
00:29:31.317 [2024-10-16 07:12:30.781950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.317 [2024-10-16 07:12:30.781980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.317 qpair failed and we were unable to recover it.
00:29:31.317 [2024-10-16 07:12:30.782308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.317 [2024-10-16 07:12:30.782336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.317 qpair failed and we were unable to recover it.
00:29:31.317 [2024-10-16 07:12:30.782692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.317 [2024-10-16 07:12:30.782720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.317 qpair failed and we were unable to recover it.
00:29:31.317 [2024-10-16 07:12:30.782961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.317 [2024-10-16 07:12:30.782990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.317 qpair failed and we were unable to recover it.
00:29:31.317 [2024-10-16 07:12:30.783377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.317 [2024-10-16 07:12:30.783405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.317 qpair failed and we were unable to recover it.
00:29:31.317 [2024-10-16 07:12:30.783764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.317 [2024-10-16 07:12:30.783793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.317 qpair failed and we were unable to recover it.
00:29:31.317 [2024-10-16 07:12:30.784047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.317 [2024-10-16 07:12:30.784076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.317 qpair failed and we were unable to recover it.
00:29:31.317 [2024-10-16 07:12:30.784441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.317 [2024-10-16 07:12:30.784470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.317 qpair failed and we were unable to recover it.
00:29:31.317 [2024-10-16 07:12:30.784834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.317 [2024-10-16 07:12:30.784874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.317 qpair failed and we were unable to recover it.
00:29:31.317 [2024-10-16 07:12:30.785234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.317 [2024-10-16 07:12:30.785263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.317 qpair failed and we were unable to recover it.
00:29:31.591 [2024-10-16 07:12:30.785615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.591 [2024-10-16 07:12:30.785647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.591 qpair failed and we were unable to recover it.
00:29:31.591 [2024-10-16 07:12:30.785867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.591 [2024-10-16 07:12:30.785897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.591 qpair failed and we were unable to recover it.
00:29:31.591 [2024-10-16 07:12:30.786312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.591 [2024-10-16 07:12:30.786347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.591 qpair failed and we were unable to recover it.
00:29:31.591 [2024-10-16 07:12:30.786691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.591 [2024-10-16 07:12:30.786720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420
00:29:31.591 qpair failed and we were unable to recover it.
00:29:31.591 [2024-10-16 07:12:30.787184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.591 [2024-10-16 07:12:30.787213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.591 qpair failed and we were unable to recover it. 00:29:31.591 [2024-10-16 07:12:30.787569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.591 [2024-10-16 07:12:30.787598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.591 qpair failed and we were unable to recover it. 00:29:31.591 [2024-10-16 07:12:30.787960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.591 [2024-10-16 07:12:30.787989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.591 qpair failed and we were unable to recover it. 00:29:31.591 [2024-10-16 07:12:30.788090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.591 [2024-10-16 07:12:30.788119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64d0000b90 with addr=10.0.0.2, port=4420 00:29:31.591 qpair failed and we were unable to recover it. 00:29:31.591 [2024-10-16 07:12:30.788392] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1708ed0 is same with the state(6) to be set 00:29:31.591 Read completed with error (sct=0, sc=8) 00:29:31.591 starting I/O failed 00:29:31.591 Read completed with error (sct=0, sc=8) 00:29:31.591 starting I/O failed 00:29:31.591 Read completed with error (sct=0, sc=8) 00:29:31.591 starting I/O failed 00:29:31.591 Read completed with error (sct=0, sc=8) 00:29:31.591 starting I/O failed 00:29:31.591 Read completed with error (sct=0, sc=8) 00:29:31.591 starting I/O failed 00:29:31.591 Read completed with error (sct=0, sc=8) 00:29:31.591 starting I/O failed 00:29:31.591 Read completed with error (sct=0, sc=8) 00:29:31.591 starting I/O failed 00:29:31.591 Read completed with error (sct=0, sc=8) 00:29:31.591 starting I/O failed 00:29:31.591 Read completed with error (sct=0, sc=8) 00:29:31.591 starting I/O failed 00:29:31.591 Read completed with error (sct=0, sc=8) 00:29:31.591 starting I/O failed 00:29:31.591 Read completed with error (sct=0, sc=8) 00:29:31.591 starting I/O failed 00:29:31.591 Read completed with error (sct=0, sc=8) 00:29:31.591 starting I/O failed 00:29:31.591 Write completed with error (sct=0, sc=8) 00:29:31.591 starting I/O failed 00:29:31.591 Read completed with error (sct=0, sc=8) 00:29:31.591 starting I/O failed 00:29:31.591 Write completed with error (sct=0, sc=8) 00:29:31.591 starting I/O failed 00:29:31.591 Write completed with error (sct=0, sc=8) 00:29:31.591 starting I/O failed 00:29:31.591 Write completed with error (sct=0, sc=8) 00:29:31.591 starting I/O failed 00:29:31.591 Read completed with error (sct=0, sc=8) 00:29:31.591 starting I/O failed 00:29:31.591 Read completed with error (sct=0, sc=8) 00:29:31.591 starting I/O failed 00:29:31.591 Write completed with error (sct=0, sc=8) 00:29:31.591 starting I/O failed 00:29:31.591 Write completed with error (sct=0, sc=8) 00:29:31.591 starting I/O failed 00:29:31.591 Read completed with error (sct=0, sc=8) 00:29:31.591 starting I/O failed 00:29:31.591 Write completed with error (sct=0, sc=8) 00:29:31.591 starting I/O failed 00:29:31.591 
Read completed with error (sct=0, sc=8) 00:29:31.591 starting I/O failed 00:29:31.591 Read completed with error (sct=0, sc=8) 00:29:31.591 starting I/O failed 00:29:31.591 Write completed with error (sct=0, sc=8) 00:29:31.591 starting I/O failed 00:29:31.591 Write completed with error (sct=0, sc=8) 00:29:31.591 starting I/O failed 00:29:31.591 Write completed with error (sct=0, sc=8) 00:29:31.591 starting I/O failed 00:29:31.591 Write completed with error (sct=0, sc=8) 00:29:31.591 starting I/O failed 00:29:31.591 Write completed with error (sct=0, sc=8) 00:29:31.591 starting I/O failed 00:29:31.591 Read completed with error (sct=0, sc=8) 00:29:31.591 starting I/O failed 00:29:31.591 Read completed with error (sct=0, sc=8) 00:29:31.591 starting I/O failed 00:29:31.591 [2024-10-16 07:12:30.789437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.592 [2024-10-16 07:12:30.790130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.592 [2024-10-16 07:12:30.790263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.592 qpair failed and we were unable to recover it. 00:29:31.592 [2024-10-16 07:12:30.790589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.592 [2024-10-16 07:12:30.790626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.592 qpair failed and we were unable to recover it. 00:29:31.592 [2024-10-16 07:12:30.791153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.592 [2024-10-16 07:12:30.791260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.592 qpair failed and we were unable to recover it. 00:29:31.592 [2024-10-16 07:12:30.791659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.592 [2024-10-16 07:12:30.791696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.592 qpair failed and we were unable to recover it. 00:29:31.592 [2024-10-16 07:12:30.792183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.592 [2024-10-16 07:12:30.792289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.592 qpair failed and we were unable to recover it. 00:29:31.592 [2024-10-16 07:12:30.792716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.592 [2024-10-16 07:12:30.792753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.592 qpair failed and we were unable to recover it. 00:29:31.592 [2024-10-16 07:12:30.792942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.592 [2024-10-16 07:12:30.792974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.592 qpair failed and we were unable to recover it. 
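For reference: errno 111 on Linux is ECONNREFUSED, meaning the connect() to 10.0.0.2:4420 (the NVMe/TCP well-known port) reached the host but nothing was listening there, so the initiator cannot re-establish its qpairs while the target side is down or mid-restart. A minimal standalone sketch that reproduces the same errno (not SPDK code; the address and port simply mirror the log):

/* Minimal sketch: reproduce the "connect() failed, errno = 111" above.
 * Not SPDK code; 10.0.0.2:4420 simply mirrors the addr/port in the log. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { .sin_family = AF_INET,
                                .sin_port = htons(4420) }; /* NVMe/TCP port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With no listener on the port this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}

Any closed port behaves the same; the point is only that errno 111 reports an actively refused connection, as opposed to a timeout (ETIMEDOUT) or an unreachable host (EHOSTUNREACH).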
00:29:31.592 [2024-10-16 07:12:30.790130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.592 [2024-10-16 07:12:30.790263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420
00:29:31.592 qpair failed and we were unable to recover it.
[... the same triple repeats 186 more times for tqpair=0x7f64dc000b90 (addr=10.0.0.2, port=4420), SPDK timestamps 07:12:30.790589 through 07:12:30.857788, Jenkins timestamps 00:29:31.592-00:29:31.597 ...]
00:29:31.597 [2024-10-16 07:12:30.858211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.597 [2024-10-16 07:12:30.858242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.597 qpair failed and we were unable to recover it. 00:29:31.597 [2024-10-16 07:12:30.858622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.597 [2024-10-16 07:12:30.858650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.597 qpair failed and we were unable to recover it. 00:29:31.597 [2024-10-16 07:12:30.859012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.597 [2024-10-16 07:12:30.859044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.597 qpair failed and we were unable to recover it. 00:29:31.597 [2024-10-16 07:12:30.859416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.597 [2024-10-16 07:12:30.859445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.597 qpair failed and we were unable to recover it. 00:29:31.597 [2024-10-16 07:12:30.859665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.597 [2024-10-16 07:12:30.859695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.597 qpair failed and we were unable to recover it. 00:29:31.597 [2024-10-16 07:12:30.860079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.597 [2024-10-16 07:12:30.860110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.597 qpair failed and we were unable to recover it. 00:29:31.597 [2024-10-16 07:12:30.860480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.597 [2024-10-16 07:12:30.860511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.597 qpair failed and we were unable to recover it. 00:29:31.597 [2024-10-16 07:12:30.860858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.597 [2024-10-16 07:12:30.860891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.597 qpair failed and we were unable to recover it. 00:29:31.597 [2024-10-16 07:12:30.861249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.597 [2024-10-16 07:12:30.861280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.597 qpair failed and we were unable to recover it. 00:29:31.597 [2024-10-16 07:12:30.861542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.597 [2024-10-16 07:12:30.861575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.597 qpair failed and we were unable to recover it. 
00:29:31.597 [2024-10-16 07:12:30.861947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.597 [2024-10-16 07:12:30.861980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.597 qpair failed and we were unable to recover it. 00:29:31.597 [2024-10-16 07:12:30.862350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.597 [2024-10-16 07:12:30.862380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.597 qpair failed and we were unable to recover it. 00:29:31.597 [2024-10-16 07:12:30.862758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.597 [2024-10-16 07:12:30.862787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.597 qpair failed and we were unable to recover it. 00:29:31.597 [2024-10-16 07:12:30.863192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.597 [2024-10-16 07:12:30.863230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.597 qpair failed and we were unable to recover it. 00:29:31.597 [2024-10-16 07:12:30.863589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.597 [2024-10-16 07:12:30.863620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.597 qpair failed and we were unable to recover it. 00:29:31.597 [2024-10-16 07:12:30.863984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.597 [2024-10-16 07:12:30.864012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.597 qpair failed and we were unable to recover it. 00:29:31.597 [2024-10-16 07:12:30.864442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.597 [2024-10-16 07:12:30.864470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.597 qpair failed and we were unable to recover it. 00:29:31.597 [2024-10-16 07:12:30.864782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.597 [2024-10-16 07:12:30.864813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.597 qpair failed and we were unable to recover it. 00:29:31.597 [2024-10-16 07:12:30.865180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.597 [2024-10-16 07:12:30.865214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.597 qpair failed and we were unable to recover it. 00:29:31.597 [2024-10-16 07:12:30.865421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.597 [2024-10-16 07:12:30.865449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.597 qpair failed and we were unable to recover it. 
00:29:31.597 [2024-10-16 07:12:30.865823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.597 [2024-10-16 07:12:30.865864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.597 qpair failed and we were unable to recover it. 00:29:31.597 [2024-10-16 07:12:30.866245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.597 [2024-10-16 07:12:30.866276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.597 qpair failed and we were unable to recover it. 00:29:31.597 [2024-10-16 07:12:30.866640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.597 [2024-10-16 07:12:30.866668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.597 qpair failed and we were unable to recover it. 00:29:31.597 [2024-10-16 07:12:30.867028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.597 [2024-10-16 07:12:30.867060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.597 qpair failed and we were unable to recover it. 00:29:31.597 [2024-10-16 07:12:30.867451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.597 [2024-10-16 07:12:30.867481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.597 qpair failed and we were unable to recover it. 00:29:31.597 [2024-10-16 07:12:30.867722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.597 [2024-10-16 07:12:30.867751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.597 qpair failed and we were unable to recover it. 00:29:31.597 [2024-10-16 07:12:30.868128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.598 [2024-10-16 07:12:30.868157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.598 qpair failed and we were unable to recover it. 00:29:31.598 [2024-10-16 07:12:30.868509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.598 [2024-10-16 07:12:30.868538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.598 qpair failed and we were unable to recover it. 00:29:31.598 [2024-10-16 07:12:30.868908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.598 [2024-10-16 07:12:30.868938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.598 qpair failed and we were unable to recover it. 00:29:31.598 [2024-10-16 07:12:30.869128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.598 [2024-10-16 07:12:30.869159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.598 qpair failed and we were unable to recover it. 
00:29:31.598 [2024-10-16 07:12:30.869533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.598 [2024-10-16 07:12:30.869564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.598 qpair failed and we were unable to recover it. 00:29:31.598 [2024-10-16 07:12:30.869787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.598 [2024-10-16 07:12:30.869815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.598 qpair failed and we were unable to recover it. 00:29:31.598 [2024-10-16 07:12:30.870208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.598 [2024-10-16 07:12:30.870238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.598 qpair failed and we were unable to recover it. 00:29:31.598 [2024-10-16 07:12:30.870602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.598 [2024-10-16 07:12:30.870632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.598 qpair failed and we were unable to recover it. 00:29:31.598 [2024-10-16 07:12:30.870905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.598 [2024-10-16 07:12:30.870937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.598 qpair failed and we were unable to recover it. 00:29:31.598 [2024-10-16 07:12:30.871302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.598 [2024-10-16 07:12:30.871333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.598 qpair failed and we were unable to recover it. 00:29:31.598 [2024-10-16 07:12:30.871558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.598 [2024-10-16 07:12:30.871586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.598 qpair failed and we were unable to recover it. 00:29:31.598 [2024-10-16 07:12:30.871926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.598 [2024-10-16 07:12:30.871956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.598 qpair failed and we were unable to recover it. 00:29:31.598 [2024-10-16 07:12:30.872289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.598 [2024-10-16 07:12:30.872319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.598 qpair failed and we were unable to recover it. 00:29:31.598 [2024-10-16 07:12:30.872677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.598 [2024-10-16 07:12:30.872707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.598 qpair failed and we were unable to recover it. 
00:29:31.598 [2024-10-16 07:12:30.872967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.598 [2024-10-16 07:12:30.872998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.598 qpair failed and we were unable to recover it. 00:29:31.598 [2024-10-16 07:12:30.873361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.598 [2024-10-16 07:12:30.873390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.598 qpair failed and we were unable to recover it. 00:29:31.598 [2024-10-16 07:12:30.873738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.598 [2024-10-16 07:12:30.873767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.598 qpair failed and we were unable to recover it. 00:29:31.598 [2024-10-16 07:12:30.874117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.598 [2024-10-16 07:12:30.874147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.598 qpair failed and we were unable to recover it. 00:29:31.598 [2024-10-16 07:12:30.874507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.598 [2024-10-16 07:12:30.874539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.598 qpair failed and we were unable to recover it. 00:29:31.598 [2024-10-16 07:12:30.874870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.598 [2024-10-16 07:12:30.874901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.598 qpair failed and we were unable to recover it. 00:29:31.598 [2024-10-16 07:12:30.875177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.598 [2024-10-16 07:12:30.875206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.598 qpair failed and we were unable to recover it. 00:29:31.598 [2024-10-16 07:12:30.875564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.598 [2024-10-16 07:12:30.875592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.598 qpair failed and we were unable to recover it. 00:29:31.598 [2024-10-16 07:12:30.875960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.598 [2024-10-16 07:12:30.875992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.598 qpair failed and we were unable to recover it. 00:29:31.598 [2024-10-16 07:12:30.876333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.598 [2024-10-16 07:12:30.876363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.598 qpair failed and we were unable to recover it. 
00:29:31.598 [2024-10-16 07:12:30.876615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.598 [2024-10-16 07:12:30.876646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.598 qpair failed and we were unable to recover it. 00:29:31.598 [2024-10-16 07:12:30.876893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.598 [2024-10-16 07:12:30.876923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.598 qpair failed and we were unable to recover it. 00:29:31.598 [2024-10-16 07:12:30.877174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.598 [2024-10-16 07:12:30.877203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.598 qpair failed and we were unable to recover it. 00:29:31.598 [2024-10-16 07:12:30.877438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.598 [2024-10-16 07:12:30.877474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.598 qpair failed and we were unable to recover it. 00:29:31.598 [2024-10-16 07:12:30.877864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.598 [2024-10-16 07:12:30.877894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.598 qpair failed and we were unable to recover it. 00:29:31.598 [2024-10-16 07:12:30.878278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.598 [2024-10-16 07:12:30.878308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.598 qpair failed and we were unable to recover it. 00:29:31.598 [2024-10-16 07:12:30.878410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.598 [2024-10-16 07:12:30.878437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.598 qpair failed and we were unable to recover it. 00:29:31.598 [2024-10-16 07:12:30.878760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.598 [2024-10-16 07:12:30.878798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.598 qpair failed and we were unable to recover it. 00:29:31.598 [2024-10-16 07:12:30.879109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.598 [2024-10-16 07:12:30.879140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.598 qpair failed and we were unable to recover it. 00:29:31.598 [2024-10-16 07:12:30.879482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.598 [2024-10-16 07:12:30.879512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.598 qpair failed and we were unable to recover it. 
00:29:31.598 [2024-10-16 07:12:30.879896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.598 [2024-10-16 07:12:30.879927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.598 qpair failed and we were unable to recover it. 00:29:31.598 [2024-10-16 07:12:30.880152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.598 [2024-10-16 07:12:30.880182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.598 qpair failed and we were unable to recover it. 00:29:31.598 [2024-10-16 07:12:30.880411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.598 [2024-10-16 07:12:30.880449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.598 qpair failed and we were unable to recover it. 00:29:31.598 [2024-10-16 07:12:30.880793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.598 [2024-10-16 07:12:30.880821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.598 qpair failed and we were unable to recover it. 00:29:31.598 [2024-10-16 07:12:30.881074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.598 [2024-10-16 07:12:30.881105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.598 qpair failed and we were unable to recover it. 00:29:31.598 [2024-10-16 07:12:30.881484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.598 [2024-10-16 07:12:30.881514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.598 qpair failed and we were unable to recover it. 00:29:31.598 [2024-10-16 07:12:30.881898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.599 [2024-10-16 07:12:30.881928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.599 qpair failed and we were unable to recover it. 00:29:31.599 [2024-10-16 07:12:30.882320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.599 [2024-10-16 07:12:30.882351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.599 qpair failed and we were unable to recover it. 00:29:31.599 [2024-10-16 07:12:30.882597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.599 [2024-10-16 07:12:30.882627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.599 qpair failed and we were unable to recover it. 00:29:31.599 [2024-10-16 07:12:30.882972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.599 [2024-10-16 07:12:30.883001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.599 qpair failed and we were unable to recover it. 
00:29:31.599 [2024-10-16 07:12:30.883369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.599 [2024-10-16 07:12:30.883399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.599 qpair failed and we were unable to recover it. 00:29:31.599 [2024-10-16 07:12:30.883754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.599 [2024-10-16 07:12:30.883785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.599 qpair failed and we were unable to recover it. 00:29:31.599 [2024-10-16 07:12:30.883940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.599 [2024-10-16 07:12:30.883971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.599 qpair failed and we were unable to recover it. 00:29:31.599 [2024-10-16 07:12:30.884278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.599 [2024-10-16 07:12:30.884308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.599 qpair failed and we were unable to recover it. 00:29:31.599 [2024-10-16 07:12:30.884670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.599 [2024-10-16 07:12:30.884699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.599 qpair failed and we were unable to recover it. 00:29:31.599 [2024-10-16 07:12:30.885088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.599 [2024-10-16 07:12:30.885121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.599 qpair failed and we were unable to recover it. 00:29:31.599 [2024-10-16 07:12:30.885478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.599 [2024-10-16 07:12:30.885507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.599 qpair failed and we were unable to recover it. 00:29:31.599 [2024-10-16 07:12:30.885892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.599 [2024-10-16 07:12:30.885922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.599 qpair failed and we were unable to recover it. 00:29:31.599 [2024-10-16 07:12:30.886167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.599 [2024-10-16 07:12:30.886197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.599 qpair failed and we were unable to recover it. 00:29:31.599 [2024-10-16 07:12:30.886584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.599 [2024-10-16 07:12:30.886614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.599 qpair failed and we were unable to recover it. 
00:29:31.599 [2024-10-16 07:12:30.886955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.599 [2024-10-16 07:12:30.886991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.599 qpair failed and we were unable to recover it. 00:29:31.599 [2024-10-16 07:12:30.887336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.599 [2024-10-16 07:12:30.887365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.599 qpair failed and we were unable to recover it. 00:29:31.599 [2024-10-16 07:12:30.887726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.599 [2024-10-16 07:12:30.887756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.599 qpair failed and we were unable to recover it. 00:29:31.599 [2024-10-16 07:12:30.888107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.599 [2024-10-16 07:12:30.888139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.599 qpair failed and we were unable to recover it. 00:29:31.599 [2024-10-16 07:12:30.888391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.599 [2024-10-16 07:12:30.888421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.599 qpair failed and we were unable to recover it. 00:29:31.599 [2024-10-16 07:12:30.888763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.599 [2024-10-16 07:12:30.888792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.599 qpair failed and we were unable to recover it. 00:29:31.599 [2024-10-16 07:12:30.889180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.599 [2024-10-16 07:12:30.889211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.599 qpair failed and we were unable to recover it. 00:29:31.599 [2024-10-16 07:12:30.889568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.599 [2024-10-16 07:12:30.889599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.599 qpair failed and we were unable to recover it. 00:29:31.599 [2024-10-16 07:12:30.889973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.599 [2024-10-16 07:12:30.890005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.599 qpair failed and we were unable to recover it. 00:29:31.599 [2024-10-16 07:12:30.890384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.599 [2024-10-16 07:12:30.890415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.599 qpair failed and we were unable to recover it. 
00:29:31.599 [2024-10-16 07:12:30.890664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.599 [2024-10-16 07:12:30.890694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.599 qpair failed and we were unable to recover it. 00:29:31.599 [2024-10-16 07:12:30.891062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.599 [2024-10-16 07:12:30.891093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.599 qpair failed and we were unable to recover it. 00:29:31.599 [2024-10-16 07:12:30.891451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.599 [2024-10-16 07:12:30.891481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.599 qpair failed and we were unable to recover it. 00:29:31.599 [2024-10-16 07:12:30.891742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.599 [2024-10-16 07:12:30.891776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.599 qpair failed and we were unable to recover it. 00:29:31.599 [2024-10-16 07:12:30.892157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.599 [2024-10-16 07:12:30.892189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.599 qpair failed and we were unable to recover it. 00:29:31.599 [2024-10-16 07:12:30.892553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.599 [2024-10-16 07:12:30.892582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.599 qpair failed and we were unable to recover it. 00:29:31.599 [2024-10-16 07:12:30.892977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.599 [2024-10-16 07:12:30.893009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.599 qpair failed and we were unable to recover it. 00:29:31.599 [2024-10-16 07:12:30.893379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.599 [2024-10-16 07:12:30.893409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.599 qpair failed and we were unable to recover it. 00:29:31.599 [2024-10-16 07:12:30.893778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.599 [2024-10-16 07:12:30.893808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.599 qpair failed and we were unable to recover it. 00:29:31.599 [2024-10-16 07:12:30.894198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.599 [2024-10-16 07:12:30.894229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.599 qpair failed and we were unable to recover it. 
00:29:31.599 [2024-10-16 07:12:30.894581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.599 [2024-10-16 07:12:30.894612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.599 qpair failed and we were unable to recover it. 00:29:31.599 [2024-10-16 07:12:30.894997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.599 [2024-10-16 07:12:30.895030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.599 qpair failed and we were unable to recover it. 00:29:31.599 [2024-10-16 07:12:30.895288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.599 [2024-10-16 07:12:30.895319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.599 qpair failed and we were unable to recover it. 00:29:31.599 [2024-10-16 07:12:30.895678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.599 [2024-10-16 07:12:30.895707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.599 qpair failed and we were unable to recover it. 00:29:31.599 [2024-10-16 07:12:30.896070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.599 [2024-10-16 07:12:30.896101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.599 qpair failed and we were unable to recover it. 00:29:31.599 [2024-10-16 07:12:30.896463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.599 [2024-10-16 07:12:30.896494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.599 qpair failed and we were unable to recover it. 00:29:31.599 [2024-10-16 07:12:30.896727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.600 [2024-10-16 07:12:30.896755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.600 qpair failed and we were unable to recover it. 00:29:31.600 [2024-10-16 07:12:30.897009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.600 [2024-10-16 07:12:30.897041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.600 qpair failed and we were unable to recover it. 00:29:31.600 [2024-10-16 07:12:30.897398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.600 [2024-10-16 07:12:30.897427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.600 qpair failed and we were unable to recover it. 00:29:31.600 [2024-10-16 07:12:30.897744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.600 [2024-10-16 07:12:30.897775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.600 qpair failed and we were unable to recover it. 
00:29:31.600 [2024-10-16 07:12:30.898035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.600 [2024-10-16 07:12:30.898065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.600 qpair failed and we were unable to recover it. 00:29:31.600 [2024-10-16 07:12:30.898341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.600 [2024-10-16 07:12:30.898370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.600 qpair failed and we were unable to recover it. 00:29:31.600 [2024-10-16 07:12:30.898747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.600 [2024-10-16 07:12:30.898777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.600 qpair failed and we were unable to recover it. 00:29:31.600 [2024-10-16 07:12:30.899137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.600 [2024-10-16 07:12:30.899169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.600 qpair failed and we were unable to recover it. 00:29:31.600 [2024-10-16 07:12:30.899531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.600 [2024-10-16 07:12:30.899562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.600 qpair failed and we were unable to recover it. 00:29:31.600 [2024-10-16 07:12:30.899949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.600 [2024-10-16 07:12:30.899981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.600 qpair failed and we were unable to recover it. 00:29:31.600 [2024-10-16 07:12:30.900210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.600 [2024-10-16 07:12:30.900240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.600 qpair failed and we were unable to recover it. 00:29:31.600 [2024-10-16 07:12:30.900604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.600 [2024-10-16 07:12:30.900634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.600 qpair failed and we were unable to recover it. 00:29:31.600 [2024-10-16 07:12:30.901005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.600 [2024-10-16 07:12:30.901035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.600 qpair failed and we were unable to recover it. 00:29:31.600 [2024-10-16 07:12:30.901403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.600 [2024-10-16 07:12:30.901431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.600 qpair failed and we were unable to recover it. 
00:29:31.600 [2024-10-16 07:12:30.901784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.600 [2024-10-16 07:12:30.901820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.600 qpair failed and we were unable to recover it. 00:29:31.600 [2024-10-16 07:12:30.902200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.600 [2024-10-16 07:12:30.902232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.600 qpair failed and we were unable to recover it. 00:29:31.600 [2024-10-16 07:12:30.902591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.600 [2024-10-16 07:12:30.902619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.600 qpair failed and we were unable to recover it. 00:29:31.600 [2024-10-16 07:12:30.902982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.600 [2024-10-16 07:12:30.903014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.600 qpair failed and we were unable to recover it. 00:29:31.600 [2024-10-16 07:12:30.903274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.600 [2024-10-16 07:12:30.903308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.600 qpair failed and we were unable to recover it. 00:29:31.600 [2024-10-16 07:12:30.903662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.600 [2024-10-16 07:12:30.903693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.600 qpair failed and we were unable to recover it. 00:29:31.600 [2024-10-16 07:12:30.904087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.600 [2024-10-16 07:12:30.904118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.600 qpair failed and we were unable to recover it. 00:29:31.600 [2024-10-16 07:12:30.904367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.600 [2024-10-16 07:12:30.904395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.600 qpair failed and we were unable to recover it. 00:29:31.600 [2024-10-16 07:12:30.904762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.600 [2024-10-16 07:12:30.904792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.600 qpair failed and we were unable to recover it. 00:29:31.600 [2024-10-16 07:12:30.905157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.600 [2024-10-16 07:12:30.905187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.600 qpair failed and we were unable to recover it. 
00:29:31.600 [2024-10-16 07:12:30.905557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.600 [2024-10-16 07:12:30.905585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420
00:29:31.600 qpair failed and we were unable to recover it.
[... the same three-line failure sequence repeats continuously, a few hundred microseconds apart, for every reconnect attempt from 07:12:30.905557 through 07:12:30.981168: posix_sock_create reports connect() failed with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7f64dc000b90 at addr=10.0.0.2, port=4420, and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:29:31.606 [2024-10-16 07:12:30.981461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.606 [2024-10-16 07:12:30.981490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.606 qpair failed and we were unable to recover it. 00:29:31.606 [2024-10-16 07:12:30.981815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.606 [2024-10-16 07:12:30.981852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.606 qpair failed and we were unable to recover it. 00:29:31.606 [2024-10-16 07:12:30.982205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.606 [2024-10-16 07:12:30.982234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.606 qpair failed and we were unable to recover it. 00:29:31.606 [2024-10-16 07:12:30.982607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.606 [2024-10-16 07:12:30.982635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.606 qpair failed and we were unable to recover it. 00:29:31.606 [2024-10-16 07:12:30.982989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.606 [2024-10-16 07:12:30.983019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.606 qpair failed and we were unable to recover it. 00:29:31.606 [2024-10-16 07:12:30.983377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.606 [2024-10-16 07:12:30.983405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.606 qpair failed and we were unable to recover it. 00:29:31.606 [2024-10-16 07:12:30.983766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.606 [2024-10-16 07:12:30.983795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.606 qpair failed and we were unable to recover it. 00:29:31.606 [2024-10-16 07:12:30.984051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.606 [2024-10-16 07:12:30.984080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.606 qpair failed and we were unable to recover it. 00:29:31.606 [2024-10-16 07:12:30.984457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.606 [2024-10-16 07:12:30.984485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.606 qpair failed and we were unable to recover it. 00:29:31.606 [2024-10-16 07:12:30.984866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.606 [2024-10-16 07:12:30.984896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.606 qpair failed and we were unable to recover it. 
00:29:31.606 [2024-10-16 07:12:30.985281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.606 [2024-10-16 07:12:30.985311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.606 qpair failed and we were unable to recover it. 00:29:31.606 [2024-10-16 07:12:30.985534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.606 [2024-10-16 07:12:30.985563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.606 qpair failed and we were unable to recover it. 00:29:31.606 [2024-10-16 07:12:30.985811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.606 [2024-10-16 07:12:30.985840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.606 qpair failed and we were unable to recover it. 00:29:31.606 [2024-10-16 07:12:30.986210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.606 [2024-10-16 07:12:30.986239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.606 qpair failed and we were unable to recover it. 00:29:31.606 [2024-10-16 07:12:30.986594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.606 [2024-10-16 07:12:30.986623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.606 qpair failed and we were unable to recover it. 00:29:31.606 [2024-10-16 07:12:30.986987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.606 [2024-10-16 07:12:30.987017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.606 qpair failed and we were unable to recover it. 00:29:31.606 [2024-10-16 07:12:30.987385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.606 [2024-10-16 07:12:30.987414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.606 qpair failed and we were unable to recover it. 00:29:31.606 [2024-10-16 07:12:30.987778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.606 [2024-10-16 07:12:30.987808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.606 qpair failed and we were unable to recover it. 00:29:31.606 [2024-10-16 07:12:30.988161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.606 [2024-10-16 07:12:30.988190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.606 qpair failed and we were unable to recover it. 00:29:31.606 [2024-10-16 07:12:30.988571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.606 [2024-10-16 07:12:30.988600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.606 qpair failed and we were unable to recover it. 
00:29:31.606 [2024-10-16 07:12:30.988961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.606 [2024-10-16 07:12:30.988997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.606 qpair failed and we were unable to recover it. 00:29:31.606 [2024-10-16 07:12:30.989225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.606 [2024-10-16 07:12:30.989254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.606 qpair failed and we were unable to recover it. 00:29:31.606 [2024-10-16 07:12:30.989491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.606 [2024-10-16 07:12:30.989522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.606 qpair failed and we were unable to recover it. 00:29:31.606 [2024-10-16 07:12:30.989869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.606 [2024-10-16 07:12:30.989898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.606 qpair failed and we were unable to recover it. 00:29:31.606 [2024-10-16 07:12:30.990291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.606 [2024-10-16 07:12:30.990319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.606 qpair failed and we were unable to recover it. 00:29:31.606 [2024-10-16 07:12:30.990690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.606 [2024-10-16 07:12:30.990718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.606 qpair failed and we were unable to recover it. 00:29:31.606 [2024-10-16 07:12:30.991082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.607 [2024-10-16 07:12:30.991113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.607 qpair failed and we were unable to recover it. 00:29:31.607 [2024-10-16 07:12:30.991533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.607 [2024-10-16 07:12:30.991561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.607 qpair failed and we were unable to recover it. 00:29:31.607 [2024-10-16 07:12:30.991914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.607 [2024-10-16 07:12:30.991944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.607 qpair failed and we were unable to recover it. 00:29:31.607 [2024-10-16 07:12:30.992322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.607 [2024-10-16 07:12:30.992359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.607 qpair failed and we were unable to recover it. 
00:29:31.607 [2024-10-16 07:12:30.992722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.607 [2024-10-16 07:12:30.992751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.607 qpair failed and we were unable to recover it. 00:29:31.607 [2024-10-16 07:12:30.993002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.607 [2024-10-16 07:12:30.993031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.607 qpair failed and we were unable to recover it. 00:29:31.607 [2024-10-16 07:12:30.993484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.607 [2024-10-16 07:12:30.993513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.607 qpair failed and we were unable to recover it. 00:29:31.607 [2024-10-16 07:12:30.993874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.607 [2024-10-16 07:12:30.993904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.607 qpair failed and we were unable to recover it. 00:29:31.607 [2024-10-16 07:12:30.994241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.607 [2024-10-16 07:12:30.994270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.607 qpair failed and we were unable to recover it. 00:29:31.607 [2024-10-16 07:12:30.994607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.607 [2024-10-16 07:12:30.994635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.607 qpair failed and we were unable to recover it. 00:29:31.607 [2024-10-16 07:12:30.994994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.607 [2024-10-16 07:12:30.995026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.607 qpair failed and we were unable to recover it. 00:29:31.607 [2024-10-16 07:12:30.995399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.607 [2024-10-16 07:12:30.995427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.607 qpair failed and we were unable to recover it. 00:29:31.607 [2024-10-16 07:12:30.995679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.607 [2024-10-16 07:12:30.995710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.607 qpair failed and we were unable to recover it. 00:29:31.607 [2024-10-16 07:12:30.996069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.607 [2024-10-16 07:12:30.996100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.607 qpair failed and we were unable to recover it. 
00:29:31.607 [2024-10-16 07:12:30.996328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.607 [2024-10-16 07:12:30.996357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.607 qpair failed and we were unable to recover it. 00:29:31.607 [2024-10-16 07:12:30.996714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.607 [2024-10-16 07:12:30.996743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.607 qpair failed and we were unable to recover it. 00:29:31.607 [2024-10-16 07:12:30.997094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.607 [2024-10-16 07:12:30.997124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.607 qpair failed and we were unable to recover it. 00:29:31.607 [2024-10-16 07:12:30.997381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.607 [2024-10-16 07:12:30.997413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.607 qpair failed and we were unable to recover it. 00:29:31.607 [2024-10-16 07:12:30.997775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.607 [2024-10-16 07:12:30.997804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.607 qpair failed and we were unable to recover it. 00:29:31.607 [2024-10-16 07:12:30.998178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.607 [2024-10-16 07:12:30.998208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.607 qpair failed and we were unable to recover it. 00:29:31.607 [2024-10-16 07:12:30.998473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.607 [2024-10-16 07:12:30.998500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.607 qpair failed and we were unable to recover it. 00:29:31.607 [2024-10-16 07:12:30.998910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.607 [2024-10-16 07:12:30.998942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.607 qpair failed and we were unable to recover it. 00:29:31.607 [2024-10-16 07:12:30.999312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.607 [2024-10-16 07:12:30.999340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.607 qpair failed and we were unable to recover it. 00:29:31.607 [2024-10-16 07:12:30.999726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.607 [2024-10-16 07:12:30.999754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.607 qpair failed and we were unable to recover it. 
00:29:31.607 [2024-10-16 07:12:31.000104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.607 [2024-10-16 07:12:31.000135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.607 qpair failed and we were unable to recover it. 00:29:31.607 [2024-10-16 07:12:31.000520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.607 [2024-10-16 07:12:31.000550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.607 qpair failed and we were unable to recover it. 00:29:31.607 [2024-10-16 07:12:31.000767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.607 [2024-10-16 07:12:31.000795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.607 qpair failed and we were unable to recover it. 00:29:31.607 [2024-10-16 07:12:31.001184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.607 [2024-10-16 07:12:31.001214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.607 qpair failed and we were unable to recover it. 00:29:31.607 [2024-10-16 07:12:31.001426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.607 [2024-10-16 07:12:31.001454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.607 qpair failed and we were unable to recover it. 00:29:31.607 [2024-10-16 07:12:31.001706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.607 [2024-10-16 07:12:31.001736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.607 qpair failed and we were unable to recover it. 00:29:31.607 [2024-10-16 07:12:31.002094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.607 [2024-10-16 07:12:31.002124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.607 qpair failed and we were unable to recover it. 00:29:31.607 [2024-10-16 07:12:31.002496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.607 [2024-10-16 07:12:31.002525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.607 qpair failed and we were unable to recover it. 00:29:31.607 [2024-10-16 07:12:31.002884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.607 [2024-10-16 07:12:31.002916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.607 qpair failed and we were unable to recover it. 00:29:31.607 [2024-10-16 07:12:31.003128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.607 [2024-10-16 07:12:31.003156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.607 qpair failed and we were unable to recover it. 
00:29:31.607 [2024-10-16 07:12:31.003527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.607 [2024-10-16 07:12:31.003562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.607 qpair failed and we were unable to recover it. 00:29:31.607 [2024-10-16 07:12:31.003926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.607 [2024-10-16 07:12:31.003956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.607 qpair failed and we were unable to recover it. 00:29:31.607 [2024-10-16 07:12:31.004329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.607 [2024-10-16 07:12:31.004357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.607 qpair failed and we were unable to recover it. 00:29:31.607 [2024-10-16 07:12:31.004697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.607 [2024-10-16 07:12:31.004726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.607 qpair failed and we were unable to recover it. 00:29:31.607 [2024-10-16 07:12:31.005203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.607 [2024-10-16 07:12:31.005232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.607 qpair failed and we were unable to recover it. 00:29:31.607 [2024-10-16 07:12:31.005590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.607 [2024-10-16 07:12:31.005618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.607 qpair failed and we were unable to recover it. 00:29:31.607 [2024-10-16 07:12:31.005998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.608 [2024-10-16 07:12:31.006027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.608 qpair failed and we were unable to recover it. 00:29:31.608 [2024-10-16 07:12:31.006234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.608 [2024-10-16 07:12:31.006262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.608 qpair failed and we were unable to recover it. 00:29:31.608 [2024-10-16 07:12:31.006630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.608 [2024-10-16 07:12:31.006659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.608 qpair failed and we were unable to recover it. 00:29:31.608 [2024-10-16 07:12:31.007025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.608 [2024-10-16 07:12:31.007055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.608 qpair failed and we were unable to recover it. 
00:29:31.608 [2024-10-16 07:12:31.007423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.608 [2024-10-16 07:12:31.007453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.608 qpair failed and we were unable to recover it. 00:29:31.608 [2024-10-16 07:12:31.007700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.608 [2024-10-16 07:12:31.007728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.608 qpair failed and we were unable to recover it. 00:29:31.608 [2024-10-16 07:12:31.007968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.608 [2024-10-16 07:12:31.007998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.608 qpair failed and we were unable to recover it. 00:29:31.608 [2024-10-16 07:12:31.008363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.608 [2024-10-16 07:12:31.008391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.608 qpair failed and we were unable to recover it. 00:29:31.608 [2024-10-16 07:12:31.008772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.608 [2024-10-16 07:12:31.008801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.608 qpair failed and we were unable to recover it. 00:29:31.608 [2024-10-16 07:12:31.009201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.608 [2024-10-16 07:12:31.009231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.608 qpair failed and we were unable to recover it. 00:29:31.608 [2024-10-16 07:12:31.009576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.608 [2024-10-16 07:12:31.009606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.608 qpair failed and we were unable to recover it. 00:29:31.608 [2024-10-16 07:12:31.009992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.608 [2024-10-16 07:12:31.010022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.608 qpair failed and we were unable to recover it. 00:29:31.608 [2024-10-16 07:12:31.010379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.608 [2024-10-16 07:12:31.010408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.608 qpair failed and we were unable to recover it. 00:29:31.608 [2024-10-16 07:12:31.010793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.608 [2024-10-16 07:12:31.010820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.608 qpair failed and we were unable to recover it. 
00:29:31.608 [2024-10-16 07:12:31.011196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.608 [2024-10-16 07:12:31.011225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.608 qpair failed and we were unable to recover it. 00:29:31.608 [2024-10-16 07:12:31.011466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.608 [2024-10-16 07:12:31.011494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.608 qpair failed and we were unable to recover it. 00:29:31.608 [2024-10-16 07:12:31.011898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.608 [2024-10-16 07:12:31.011928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.608 qpair failed and we were unable to recover it. 00:29:31.608 [2024-10-16 07:12:31.012311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.608 [2024-10-16 07:12:31.012340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.608 qpair failed and we were unable to recover it. 00:29:31.608 [2024-10-16 07:12:31.012704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.608 [2024-10-16 07:12:31.012734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.608 qpair failed and we were unable to recover it. 00:29:31.608 [2024-10-16 07:12:31.012965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.608 [2024-10-16 07:12:31.012994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.608 qpair failed and we were unable to recover it. 00:29:31.608 [2024-10-16 07:12:31.013357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.608 [2024-10-16 07:12:31.013387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.608 qpair failed and we were unable to recover it. 00:29:31.608 [2024-10-16 07:12:31.013744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.608 [2024-10-16 07:12:31.013774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.608 qpair failed and we were unable to recover it. 00:29:31.608 [2024-10-16 07:12:31.013989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.608 [2024-10-16 07:12:31.014019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.608 qpair failed and we were unable to recover it. 00:29:31.608 [2024-10-16 07:12:31.014390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.608 [2024-10-16 07:12:31.014418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.608 qpair failed and we were unable to recover it. 
00:29:31.608 [2024-10-16 07:12:31.014787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.608 [2024-10-16 07:12:31.014817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.608 qpair failed and we were unable to recover it. 00:29:31.608 [2024-10-16 07:12:31.015049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.608 [2024-10-16 07:12:31.015079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.608 qpair failed and we were unable to recover it. 00:29:31.608 [2024-10-16 07:12:31.015297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.608 [2024-10-16 07:12:31.015325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.608 qpair failed and we were unable to recover it. 00:29:31.608 [2024-10-16 07:12:31.015696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.608 [2024-10-16 07:12:31.015726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.608 qpair failed and we were unable to recover it. 00:29:31.608 [2024-10-16 07:12:31.016062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.608 [2024-10-16 07:12:31.016092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.608 qpair failed and we were unable to recover it. 00:29:31.608 [2024-10-16 07:12:31.016194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.608 [2024-10-16 07:12:31.016222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.608 qpair failed and we were unable to recover it. 00:29:31.608 [2024-10-16 07:12:31.016590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.608 [2024-10-16 07:12:31.016618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.608 qpair failed and we were unable to recover it. 00:29:31.608 [2024-10-16 07:12:31.016968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.608 [2024-10-16 07:12:31.016998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.608 qpair failed and we were unable to recover it. 00:29:31.608 [2024-10-16 07:12:31.017358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.608 [2024-10-16 07:12:31.017387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.608 qpair failed and we were unable to recover it. 00:29:31.608 [2024-10-16 07:12:31.017741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.608 [2024-10-16 07:12:31.017771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.608 qpair failed and we were unable to recover it. 
00:29:31.608 [2024-10-16 07:12:31.018150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.608 [2024-10-16 07:12:31.018185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.608 qpair failed and we were unable to recover it. 00:29:31.608 [2024-10-16 07:12:31.018537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.609 [2024-10-16 07:12:31.018567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.609 qpair failed and we were unable to recover it. 00:29:31.609 [2024-10-16 07:12:31.018810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.609 [2024-10-16 07:12:31.018839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.609 qpair failed and we were unable to recover it. 00:29:31.609 [2024-10-16 07:12:31.019049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.609 [2024-10-16 07:12:31.019078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.609 qpair failed and we were unable to recover it. 00:29:31.609 [2024-10-16 07:12:31.019435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.609 [2024-10-16 07:12:31.019464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.609 qpair failed and we were unable to recover it. 00:29:31.609 [2024-10-16 07:12:31.019841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.609 [2024-10-16 07:12:31.019880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.609 qpair failed and we were unable to recover it. 00:29:31.609 [2024-10-16 07:12:31.020231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.609 [2024-10-16 07:12:31.020261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.609 qpair failed and we were unable to recover it. 00:29:31.609 [2024-10-16 07:12:31.020617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.609 [2024-10-16 07:12:31.020646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.609 qpair failed and we were unable to recover it. 00:29:31.609 [2024-10-16 07:12:31.020872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.609 [2024-10-16 07:12:31.020902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.609 qpair failed and we were unable to recover it. 00:29:31.609 [2024-10-16 07:12:31.021154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.609 [2024-10-16 07:12:31.021183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.609 qpair failed and we were unable to recover it. 
00:29:31.609 [2024-10-16 07:12:31.021419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.609 [2024-10-16 07:12:31.021448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.609 qpair failed and we were unable to recover it. 00:29:31.609 [2024-10-16 07:12:31.021808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.609 [2024-10-16 07:12:31.021837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.609 qpair failed and we were unable to recover it. 00:29:31.609 [2024-10-16 07:12:31.022170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.609 [2024-10-16 07:12:31.022200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.609 qpair failed and we were unable to recover it. 00:29:31.609 [2024-10-16 07:12:31.022562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.609 [2024-10-16 07:12:31.022591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.609 qpair failed and we were unable to recover it. 00:29:31.609 [2024-10-16 07:12:31.022966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.609 [2024-10-16 07:12:31.022996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.609 qpair failed and we were unable to recover it. 00:29:31.609 [2024-10-16 07:12:31.023269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.609 [2024-10-16 07:12:31.023298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.609 qpair failed and we were unable to recover it. 00:29:31.609 [2024-10-16 07:12:31.023695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.609 [2024-10-16 07:12:31.023723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.609 qpair failed and we were unable to recover it. 00:29:31.609 [2024-10-16 07:12:31.024076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.609 [2024-10-16 07:12:31.024108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.609 qpair failed and we were unable to recover it. 00:29:31.609 [2024-10-16 07:12:31.024325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.609 [2024-10-16 07:12:31.024354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.609 qpair failed and we were unable to recover it. 00:29:31.609 [2024-10-16 07:12:31.024718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.609 [2024-10-16 07:12:31.024748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.609 qpair failed and we were unable to recover it. 
00:29:31.609 [2024-10-16 07:12:31.024967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.609 [2024-10-16 07:12:31.024997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.609 qpair failed and we were unable to recover it. 00:29:31.609 [2024-10-16 07:12:31.025360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.609 [2024-10-16 07:12:31.025389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.609 qpair failed and we were unable to recover it. 00:29:31.609 [2024-10-16 07:12:31.025601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.609 [2024-10-16 07:12:31.025629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.609 qpair failed and we were unable to recover it. 00:29:31.609 [2024-10-16 07:12:31.026006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.609 [2024-10-16 07:12:31.026036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.609 qpair failed and we were unable to recover it. 00:29:31.609 [2024-10-16 07:12:31.026395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.609 [2024-10-16 07:12:31.026425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.609 qpair failed and we were unable to recover it. 00:29:31.609 [2024-10-16 07:12:31.026806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.609 [2024-10-16 07:12:31.026834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.609 qpair failed and we were unable to recover it. 00:29:31.609 [2024-10-16 07:12:31.027069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.609 [2024-10-16 07:12:31.027098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.609 qpair failed and we were unable to recover it. 00:29:31.609 [2024-10-16 07:12:31.027439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.609 [2024-10-16 07:12:31.027468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.609 qpair failed and we were unable to recover it. 00:29:31.609 [2024-10-16 07:12:31.027872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.609 [2024-10-16 07:12:31.027902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.609 qpair failed and we were unable to recover it. 00:29:31.609 [2024-10-16 07:12:31.028255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.609 [2024-10-16 07:12:31.028285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.609 qpair failed and we were unable to recover it. 
00:29:31.609 [2024-10-16 07:12:31.028665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.609 [2024-10-16 07:12:31.028694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420
00:29:31.609 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (connect() refused with errno = 111, the nvme_tcp sock connection error for tqpair=0x7f64dc000b90 at addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it.") repeats continuously from 07:12:31.028665 through 07:12:31.105190 ...]
00:29:31.922 [2024-10-16 07:12:31.105160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.922 [2024-10-16 07:12:31.105190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420
00:29:31.922 qpair failed and we were unable to recover it.
00:29:31.922 [2024-10-16 07:12:31.105435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.922 [2024-10-16 07:12:31.105464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.922 qpair failed and we were unable to recover it. 00:29:31.922 [2024-10-16 07:12:31.105824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.922 [2024-10-16 07:12:31.105866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.922 qpair failed and we were unable to recover it. 00:29:31.922 [2024-10-16 07:12:31.106232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.923 [2024-10-16 07:12:31.106263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.923 qpair failed and we were unable to recover it. 00:29:31.923 [2024-10-16 07:12:31.106615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.923 [2024-10-16 07:12:31.106644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.923 qpair failed and we were unable to recover it. 00:29:31.923 [2024-10-16 07:12:31.107049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.923 [2024-10-16 07:12:31.107081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.923 qpair failed and we were unable to recover it. 00:29:31.923 [2024-10-16 07:12:31.107441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.923 [2024-10-16 07:12:31.107471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.923 qpair failed and we were unable to recover it. 00:29:31.923 [2024-10-16 07:12:31.107841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.923 [2024-10-16 07:12:31.107880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.923 qpair failed and we were unable to recover it. 00:29:31.923 [2024-10-16 07:12:31.108250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.923 [2024-10-16 07:12:31.108279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.923 qpair failed and we were unable to recover it. 00:29:31.923 [2024-10-16 07:12:31.108614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.923 [2024-10-16 07:12:31.108642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.923 qpair failed and we were unable to recover it. 00:29:31.923 [2024-10-16 07:12:31.108916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.923 [2024-10-16 07:12:31.108947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.923 qpair failed and we were unable to recover it. 
00:29:31.923 [2024-10-16 07:12:31.109202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.923 [2024-10-16 07:12:31.109232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.923 qpair failed and we were unable to recover it. 00:29:31.923 [2024-10-16 07:12:31.109617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.923 [2024-10-16 07:12:31.109648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.923 qpair failed and we were unable to recover it. 00:29:31.923 [2024-10-16 07:12:31.109869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.923 [2024-10-16 07:12:31.109900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.923 qpair failed and we were unable to recover it. 00:29:31.923 [2024-10-16 07:12:31.110358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.923 [2024-10-16 07:12:31.110387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.923 qpair failed and we were unable to recover it. 00:29:31.923 [2024-10-16 07:12:31.110751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.923 [2024-10-16 07:12:31.110780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.923 qpair failed and we were unable to recover it. 00:29:31.923 [2024-10-16 07:12:31.111028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.923 [2024-10-16 07:12:31.111060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.923 qpair failed and we were unable to recover it. 00:29:31.923 [2024-10-16 07:12:31.111414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.923 [2024-10-16 07:12:31.111444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.923 qpair failed and we were unable to recover it. 00:29:31.923 [2024-10-16 07:12:31.111690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.923 [2024-10-16 07:12:31.111720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.923 qpair failed and we were unable to recover it. 00:29:31.923 [2024-10-16 07:12:31.111994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.923 [2024-10-16 07:12:31.112023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.923 qpair failed and we were unable to recover it. 00:29:31.923 [2024-10-16 07:12:31.112289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.923 [2024-10-16 07:12:31.112318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.923 qpair failed and we were unable to recover it. 
00:29:31.923 [2024-10-16 07:12:31.112689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.923 [2024-10-16 07:12:31.112719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.923 qpair failed and we were unable to recover it. 00:29:31.923 [2024-10-16 07:12:31.113090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.923 [2024-10-16 07:12:31.113121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.923 qpair failed and we were unable to recover it. 00:29:31.923 [2024-10-16 07:12:31.113512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.923 [2024-10-16 07:12:31.113542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.923 qpair failed and we were unable to recover it. 00:29:31.923 [2024-10-16 07:12:31.113767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.923 [2024-10-16 07:12:31.113796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.923 qpair failed and we were unable to recover it. 00:29:31.923 [2024-10-16 07:12:31.114177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.923 [2024-10-16 07:12:31.114207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.923 qpair failed and we were unable to recover it. 00:29:31.923 [2024-10-16 07:12:31.114554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.923 [2024-10-16 07:12:31.114585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.923 qpair failed and we were unable to recover it. 00:29:31.923 [2024-10-16 07:12:31.114970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.923 [2024-10-16 07:12:31.115000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.923 qpair failed and we were unable to recover it. 00:29:31.923 [2024-10-16 07:12:31.115378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.923 [2024-10-16 07:12:31.115414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.923 qpair failed and we were unable to recover it. 00:29:31.923 [2024-10-16 07:12:31.115646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.923 [2024-10-16 07:12:31.115675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.923 qpair failed and we were unable to recover it. 00:29:31.923 [2024-10-16 07:12:31.115889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.923 [2024-10-16 07:12:31.115918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.923 qpair failed and we were unable to recover it. 
00:29:31.923 [2024-10-16 07:12:31.116302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.923 [2024-10-16 07:12:31.116333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.923 qpair failed and we were unable to recover it. 00:29:31.923 [2024-10-16 07:12:31.116577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.923 [2024-10-16 07:12:31.116606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.923 qpair failed and we were unable to recover it. 00:29:31.923 [2024-10-16 07:12:31.116960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.923 [2024-10-16 07:12:31.116991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.923 qpair failed and we were unable to recover it. 00:29:31.923 [2024-10-16 07:12:31.117367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.923 [2024-10-16 07:12:31.117396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.923 qpair failed and we were unable to recover it. 00:29:31.923 [2024-10-16 07:12:31.117774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.923 [2024-10-16 07:12:31.117805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.923 qpair failed and we were unable to recover it. 00:29:31.923 [2024-10-16 07:12:31.118186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.923 [2024-10-16 07:12:31.118215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.923 qpair failed and we were unable to recover it. 00:29:31.923 [2024-10-16 07:12:31.118555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.923 [2024-10-16 07:12:31.118584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.923 qpair failed and we were unable to recover it. 00:29:31.923 [2024-10-16 07:12:31.118972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.923 [2024-10-16 07:12:31.119003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.923 qpair failed and we were unable to recover it. 00:29:31.923 [2024-10-16 07:12:31.119383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.923 [2024-10-16 07:12:31.119413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.923 qpair failed and we were unable to recover it. 00:29:31.923 [2024-10-16 07:12:31.119647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.923 [2024-10-16 07:12:31.119676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.923 qpair failed and we were unable to recover it. 
00:29:31.923 [2024-10-16 07:12:31.119827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.923 [2024-10-16 07:12:31.119866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.923 qpair failed and we were unable to recover it. 00:29:31.924 [2024-10-16 07:12:31.120251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.924 [2024-10-16 07:12:31.120282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.924 qpair failed and we were unable to recover it. 00:29:31.924 [2024-10-16 07:12:31.120636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.924 [2024-10-16 07:12:31.120665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.924 qpair failed and we were unable to recover it. 00:29:31.924 [2024-10-16 07:12:31.121074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.924 [2024-10-16 07:12:31.121106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.924 qpair failed and we were unable to recover it. 00:29:31.924 [2024-10-16 07:12:31.121452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.924 [2024-10-16 07:12:31.121481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.924 qpair failed and we were unable to recover it. 00:29:31.924 [2024-10-16 07:12:31.121851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.924 [2024-10-16 07:12:31.121883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.924 qpair failed and we were unable to recover it. 00:29:31.924 [2024-10-16 07:12:31.122112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.924 [2024-10-16 07:12:31.122141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.924 qpair failed and we were unable to recover it. 00:29:31.924 [2024-10-16 07:12:31.122510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.924 [2024-10-16 07:12:31.122539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.924 qpair failed and we were unable to recover it. 00:29:31.924 [2024-10-16 07:12:31.122925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.924 [2024-10-16 07:12:31.122956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.924 qpair failed and we were unable to recover it. 00:29:31.924 [2024-10-16 07:12:31.123307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.924 [2024-10-16 07:12:31.123336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.924 qpair failed and we were unable to recover it. 
00:29:31.924 [2024-10-16 07:12:31.123717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.924 [2024-10-16 07:12:31.123748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.924 qpair failed and we were unable to recover it. 00:29:31.924 [2024-10-16 07:12:31.123994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.924 [2024-10-16 07:12:31.124025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.924 qpair failed and we were unable to recover it. 00:29:31.924 [2024-10-16 07:12:31.124364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.924 [2024-10-16 07:12:31.124395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.924 qpair failed and we were unable to recover it. 00:29:31.924 [2024-10-16 07:12:31.124774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.924 [2024-10-16 07:12:31.124803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.924 qpair failed and we were unable to recover it. 00:29:31.924 [2024-10-16 07:12:31.125178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.924 [2024-10-16 07:12:31.125210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.924 qpair failed and we were unable to recover it. 00:29:31.924 [2024-10-16 07:12:31.125566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.924 [2024-10-16 07:12:31.125595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.924 qpair failed and we were unable to recover it. 00:29:31.924 [2024-10-16 07:12:31.125964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.924 [2024-10-16 07:12:31.125996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.924 qpair failed and we were unable to recover it. 00:29:31.924 [2024-10-16 07:12:31.126384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.924 [2024-10-16 07:12:31.126414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.924 qpair failed and we were unable to recover it. 00:29:31.924 [2024-10-16 07:12:31.126782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.924 [2024-10-16 07:12:31.126811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.924 qpair failed and we were unable to recover it. 00:29:31.924 [2024-10-16 07:12:31.127171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.924 [2024-10-16 07:12:31.127202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.924 qpair failed and we were unable to recover it. 
00:29:31.924 [2024-10-16 07:12:31.127542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.924 [2024-10-16 07:12:31.127571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.924 qpair failed and we were unable to recover it. 00:29:31.924 [2024-10-16 07:12:31.127932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.924 [2024-10-16 07:12:31.127962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.924 qpair failed and we were unable to recover it. 00:29:31.924 [2024-10-16 07:12:31.128177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.924 [2024-10-16 07:12:31.128205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.924 qpair failed and we were unable to recover it. 00:29:31.924 [2024-10-16 07:12:31.128464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.924 [2024-10-16 07:12:31.128494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.924 qpair failed and we were unable to recover it. 00:29:31.924 [2024-10-16 07:12:31.128856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.924 [2024-10-16 07:12:31.128886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.924 qpair failed and we were unable to recover it. 00:29:31.924 [2024-10-16 07:12:31.129288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.924 [2024-10-16 07:12:31.129317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.924 qpair failed and we were unable to recover it. 00:29:31.924 [2024-10-16 07:12:31.129671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.924 [2024-10-16 07:12:31.129702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.924 qpair failed and we were unable to recover it. 00:29:31.924 [2024-10-16 07:12:31.130048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.924 [2024-10-16 07:12:31.130083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.924 qpair failed and we were unable to recover it. 00:29:31.924 [2024-10-16 07:12:31.130457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.924 [2024-10-16 07:12:31.130485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.924 qpair failed and we were unable to recover it. 00:29:31.924 [2024-10-16 07:12:31.130841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.924 [2024-10-16 07:12:31.130878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.924 qpair failed and we were unable to recover it. 
00:29:31.924 [2024-10-16 07:12:31.131272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.924 [2024-10-16 07:12:31.131301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.924 qpair failed and we were unable to recover it. 00:29:31.924 [2024-10-16 07:12:31.131593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.925 [2024-10-16 07:12:31.131620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.925 qpair failed and we were unable to recover it. 00:29:31.925 [2024-10-16 07:12:31.131854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.925 [2024-10-16 07:12:31.131884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.925 qpair failed and we were unable to recover it. 00:29:31.925 [2024-10-16 07:12:31.132245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.925 [2024-10-16 07:12:31.132274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.925 qpair failed and we were unable to recover it. 00:29:31.925 [2024-10-16 07:12:31.132631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.925 [2024-10-16 07:12:31.132660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.925 qpair failed and we were unable to recover it. 00:29:31.925 [2024-10-16 07:12:31.133004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.925 [2024-10-16 07:12:31.133035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.925 qpair failed and we were unable to recover it. 00:29:31.925 [2024-10-16 07:12:31.133385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.925 [2024-10-16 07:12:31.133414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.925 qpair failed and we were unable to recover it. 00:29:31.925 [2024-10-16 07:12:31.133649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.925 [2024-10-16 07:12:31.133677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.925 qpair failed and we were unable to recover it. 00:29:31.925 [2024-10-16 07:12:31.134056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.925 [2024-10-16 07:12:31.134086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.925 qpair failed and we were unable to recover it. 00:29:31.925 [2024-10-16 07:12:31.134436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.925 [2024-10-16 07:12:31.134463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.925 qpair failed and we were unable to recover it. 
00:29:31.925 [2024-10-16 07:12:31.134691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.925 [2024-10-16 07:12:31.134719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.925 qpair failed and we were unable to recover it. 00:29:31.925 [2024-10-16 07:12:31.134953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.925 [2024-10-16 07:12:31.134983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.925 qpair failed and we were unable to recover it. 00:29:31.925 [2024-10-16 07:12:31.135345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.925 [2024-10-16 07:12:31.135374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.925 qpair failed and we were unable to recover it. 00:29:31.925 [2024-10-16 07:12:31.135612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.925 [2024-10-16 07:12:31.135643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.925 qpair failed and we were unable to recover it. 00:29:31.925 [2024-10-16 07:12:31.135998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.925 [2024-10-16 07:12:31.136029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.925 qpair failed and we were unable to recover it. 00:29:31.925 [2024-10-16 07:12:31.136412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.925 [2024-10-16 07:12:31.136441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.925 qpair failed and we were unable to recover it. 00:29:31.925 [2024-10-16 07:12:31.136941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.925 [2024-10-16 07:12:31.136971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.925 qpair failed and we were unable to recover it. 00:29:31.925 [2024-10-16 07:12:31.137321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.925 [2024-10-16 07:12:31.137351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.925 qpair failed and we were unable to recover it. 00:29:31.925 [2024-10-16 07:12:31.137767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.925 [2024-10-16 07:12:31.137795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.925 qpair failed and we were unable to recover it. 00:29:31.925 [2024-10-16 07:12:31.138141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.925 [2024-10-16 07:12:31.138171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.925 qpair failed and we were unable to recover it. 
00:29:31.925 [2024-10-16 07:12:31.138551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.925 [2024-10-16 07:12:31.138580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.925 qpair failed and we were unable to recover it. 00:29:31.925 [2024-10-16 07:12:31.138972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.925 [2024-10-16 07:12:31.139001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.925 qpair failed and we were unable to recover it. 00:29:31.925 [2024-10-16 07:12:31.139368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.925 [2024-10-16 07:12:31.139397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.925 qpair failed and we were unable to recover it. 00:29:31.925 [2024-10-16 07:12:31.139637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.925 [2024-10-16 07:12:31.139669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.925 qpair failed and we were unable to recover it. 00:29:31.925 [2024-10-16 07:12:31.140077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.925 [2024-10-16 07:12:31.140108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.925 qpair failed and we were unable to recover it. 00:29:31.925 [2024-10-16 07:12:31.140327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.925 [2024-10-16 07:12:31.140358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.925 qpair failed and we were unable to recover it. 00:29:31.925 [2024-10-16 07:12:31.140686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.925 [2024-10-16 07:12:31.140716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.925 qpair failed and we were unable to recover it. 00:29:31.926 [2024-10-16 07:12:31.141060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.926 [2024-10-16 07:12:31.141091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.926 qpair failed and we were unable to recover it. 00:29:31.926 [2024-10-16 07:12:31.141465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.926 [2024-10-16 07:12:31.141493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.926 qpair failed and we were unable to recover it. 00:29:31.926 [2024-10-16 07:12:31.141814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.926 [2024-10-16 07:12:31.141867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.926 qpair failed and we were unable to recover it. 
00:29:31.926 [2024-10-16 07:12:31.142248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.926 [2024-10-16 07:12:31.142278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.926 qpair failed and we were unable to recover it. 00:29:31.926 [2024-10-16 07:12:31.142630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.926 [2024-10-16 07:12:31.142659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.926 qpair failed and we were unable to recover it. 00:29:31.926 [2024-10-16 07:12:31.142882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.926 [2024-10-16 07:12:31.142912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.926 qpair failed and we were unable to recover it. 00:29:31.926 [2024-10-16 07:12:31.143132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.926 [2024-10-16 07:12:31.143161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.926 qpair failed and we were unable to recover it. 00:29:31.926 [2024-10-16 07:12:31.143502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.926 [2024-10-16 07:12:31.143531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.926 qpair failed and we were unable to recover it. 00:29:31.926 [2024-10-16 07:12:31.143750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.926 [2024-10-16 07:12:31.143779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.926 qpair failed and we were unable to recover it. 00:29:31.926 [2024-10-16 07:12:31.144213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.926 [2024-10-16 07:12:31.144244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.926 qpair failed and we were unable to recover it. 00:29:31.926 [2024-10-16 07:12:31.144595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.926 [2024-10-16 07:12:31.144636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.926 qpair failed and we were unable to recover it. 00:29:31.926 [2024-10-16 07:12:31.145016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.926 [2024-10-16 07:12:31.145047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.926 qpair failed and we were unable to recover it. 00:29:31.926 [2024-10-16 07:12:31.145414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.926 [2024-10-16 07:12:31.145443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.926 qpair failed and we were unable to recover it. 
00:29:31.926 [2024-10-16 07:12:31.145824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.926 [2024-10-16 07:12:31.145861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.926 qpair failed and we were unable to recover it. 00:29:31.926 [2024-10-16 07:12:31.146087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.926 [2024-10-16 07:12:31.146116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.926 qpair failed and we were unable to recover it. 00:29:31.926 [2024-10-16 07:12:31.146447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.926 [2024-10-16 07:12:31.146477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.926 qpair failed and we were unable to recover it. 00:29:31.926 [2024-10-16 07:12:31.146841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.926 [2024-10-16 07:12:31.146893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.926 qpair failed and we were unable to recover it. 00:29:31.926 [2024-10-16 07:12:31.147268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.926 [2024-10-16 07:12:31.147298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.926 qpair failed and we were unable to recover it. 00:29:31.926 [2024-10-16 07:12:31.147671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.926 [2024-10-16 07:12:31.147700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.926 qpair failed and we were unable to recover it. 00:29:31.926 [2024-10-16 07:12:31.147975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.926 [2024-10-16 07:12:31.148004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.926 qpair failed and we were unable to recover it. 00:29:31.926 [2024-10-16 07:12:31.148419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.926 [2024-10-16 07:12:31.148447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.926 qpair failed and we were unable to recover it. 00:29:31.926 [2024-10-16 07:12:31.148703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.926 [2024-10-16 07:12:31.148731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.926 qpair failed and we were unable to recover it. 00:29:31.926 [2024-10-16 07:12:31.149083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.926 [2024-10-16 07:12:31.149115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.926 qpair failed and we were unable to recover it. 
00:29:31.926 [2024-10-16 07:12:31.149300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.926 [2024-10-16 07:12:31.149328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.926 qpair failed and we were unable to recover it. 00:29:31.926 [2024-10-16 07:12:31.149712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.926 [2024-10-16 07:12:31.149741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.926 qpair failed and we were unable to recover it. 00:29:31.926 [2024-10-16 07:12:31.150150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.926 [2024-10-16 07:12:31.150181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.926 qpair failed and we were unable to recover it. 00:29:31.926 [2024-10-16 07:12:31.150436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.926 [2024-10-16 07:12:31.150463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.926 qpair failed and we were unable to recover it. 00:29:31.926 [2024-10-16 07:12:31.150785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.926 [2024-10-16 07:12:31.150814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.926 qpair failed and we were unable to recover it. 00:29:31.926 [2024-10-16 07:12:31.151054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.926 [2024-10-16 07:12:31.151084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.926 qpair failed and we were unable to recover it. 00:29:31.926 [2024-10-16 07:12:31.151489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.926 [2024-10-16 07:12:31.151518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.926 qpair failed and we were unable to recover it. 00:29:31.926 [2024-10-16 07:12:31.151657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.926 [2024-10-16 07:12:31.151686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.926 qpair failed and we were unable to recover it. 00:29:31.926 [2024-10-16 07:12:31.152038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.926 [2024-10-16 07:12:31.152069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.926 qpair failed and we were unable to recover it. 00:29:31.926 [2024-10-16 07:12:31.152410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.927 [2024-10-16 07:12:31.152439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.927 qpair failed and we were unable to recover it. 
00:29:31.927 [2024-10-16 07:12:31.152788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.927 [2024-10-16 07:12:31.152818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.927 qpair failed and we were unable to recover it. 00:29:31.927 [2024-10-16 07:12:31.153161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.927 [2024-10-16 07:12:31.153193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.927 qpair failed and we were unable to recover it. 00:29:31.927 [2024-10-16 07:12:31.153551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.927 [2024-10-16 07:12:31.153581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.927 qpair failed and we were unable to recover it. 00:29:31.927 [2024-10-16 07:12:31.153937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.927 [2024-10-16 07:12:31.153969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.927 qpair failed and we were unable to recover it. 00:29:31.927 [2024-10-16 07:12:31.154351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.927 [2024-10-16 07:12:31.154381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.927 qpair failed and we were unable to recover it. 00:29:31.927 [2024-10-16 07:12:31.154593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.927 [2024-10-16 07:12:31.154622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.927 qpair failed and we were unable to recover it. 00:29:31.927 [2024-10-16 07:12:31.155004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.927 [2024-10-16 07:12:31.155034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.927 qpair failed and we were unable to recover it. 00:29:31.927 [2024-10-16 07:12:31.155419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.927 [2024-10-16 07:12:31.155448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.927 qpair failed and we were unable to recover it. 00:29:31.927 [2024-10-16 07:12:31.155828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.927 [2024-10-16 07:12:31.155866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.927 qpair failed and we were unable to recover it. 00:29:31.927 [2024-10-16 07:12:31.156115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.927 [2024-10-16 07:12:31.156144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.927 qpair failed and we were unable to recover it. 
00:29:31.931 [2024-10-16 07:12:31.228191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.931 [2024-10-16 07:12:31.228219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.931 qpair failed and we were unable to recover it. 00:29:31.931 [2024-10-16 07:12:31.228593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.931 [2024-10-16 07:12:31.228622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.931 qpair failed and we were unable to recover it. 00:29:31.931 [2024-10-16 07:12:31.228983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.931 [2024-10-16 07:12:31.229012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.931 qpair failed and we were unable to recover it. 00:29:31.931 [2024-10-16 07:12:31.229382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.931 [2024-10-16 07:12:31.229410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.931 qpair failed and we were unable to recover it. 00:29:31.931 [2024-10-16 07:12:31.229761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.931 [2024-10-16 07:12:31.229791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.931 qpair failed and we were unable to recover it. 00:29:31.931 [2024-10-16 07:12:31.230020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.931 [2024-10-16 07:12:31.230050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.931 qpair failed and we were unable to recover it. 00:29:31.931 [2024-10-16 07:12:31.230393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.931 [2024-10-16 07:12:31.230421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.931 qpair failed and we were unable to recover it. 00:29:31.931 [2024-10-16 07:12:31.230802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.931 [2024-10-16 07:12:31.230837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.931 qpair failed and we were unable to recover it. 00:29:31.931 [2024-10-16 07:12:31.231221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.931 [2024-10-16 07:12:31.231257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.931 qpair failed and we were unable to recover it. 00:29:31.931 [2024-10-16 07:12:31.231642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.931 [2024-10-16 07:12:31.231671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.931 qpair failed and we were unable to recover it. 
00:29:31.931 [2024-10-16 07:12:31.232009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.931 [2024-10-16 07:12:31.232040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.931 qpair failed and we were unable to recover it. 00:29:31.931 [2024-10-16 07:12:31.232276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.931 [2024-10-16 07:12:31.232304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.931 qpair failed and we were unable to recover it. 00:29:31.931 [2024-10-16 07:12:31.232665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.931 [2024-10-16 07:12:31.232695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.931 qpair failed and we were unable to recover it. 00:29:31.931 [2024-10-16 07:12:31.233065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.931 [2024-10-16 07:12:31.233094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.931 qpair failed and we were unable to recover it. 00:29:31.931 [2024-10-16 07:12:31.233459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.931 [2024-10-16 07:12:31.233488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.931 qpair failed and we were unable to recover it. 00:29:31.931 [2024-10-16 07:12:31.233860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.931 [2024-10-16 07:12:31.233890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.931 qpair failed and we were unable to recover it. 00:29:31.931 [2024-10-16 07:12:31.234239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.931 [2024-10-16 07:12:31.234267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.931 qpair failed and we were unable to recover it. 00:29:31.931 [2024-10-16 07:12:31.234505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.931 [2024-10-16 07:12:31.234536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.931 qpair failed and we were unable to recover it. 00:29:31.931 [2024-10-16 07:12:31.234772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.931 [2024-10-16 07:12:31.234801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.931 qpair failed and we were unable to recover it. 00:29:31.931 [2024-10-16 07:12:31.235034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.931 [2024-10-16 07:12:31.235064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.931 qpair failed and we were unable to recover it. 
00:29:31.931 [2024-10-16 07:12:31.235419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.931 [2024-10-16 07:12:31.235448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.931 qpair failed and we were unable to recover it. 00:29:31.931 [2024-10-16 07:12:31.235818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.931 [2024-10-16 07:12:31.235856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.931 qpair failed and we were unable to recover it. 00:29:31.931 [2024-10-16 07:12:31.236082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.931 [2024-10-16 07:12:31.236110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.931 qpair failed and we were unable to recover it. 00:29:31.931 [2024-10-16 07:12:31.236464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.931 [2024-10-16 07:12:31.236492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.931 qpair failed and we were unable to recover it. 00:29:31.931 [2024-10-16 07:12:31.236873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.931 [2024-10-16 07:12:31.236903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.931 qpair failed and we were unable to recover it. 00:29:31.931 [2024-10-16 07:12:31.237154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.931 [2024-10-16 07:12:31.237183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.931 qpair failed and we were unable to recover it. 00:29:31.931 [2024-10-16 07:12:31.237548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.931 [2024-10-16 07:12:31.237578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.931 qpair failed and we were unable to recover it. 00:29:31.931 [2024-10-16 07:12:31.237798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.931 [2024-10-16 07:12:31.237825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.931 qpair failed and we were unable to recover it. 00:29:31.931 [2024-10-16 07:12:31.238199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.931 [2024-10-16 07:12:31.238230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.931 qpair failed and we were unable to recover it. 00:29:31.931 [2024-10-16 07:12:31.238590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.931 [2024-10-16 07:12:31.238619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.931 qpair failed and we were unable to recover it. 
00:29:31.931 [2024-10-16 07:12:31.238996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.931 [2024-10-16 07:12:31.239025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.931 qpair failed and we were unable to recover it. 00:29:31.931 [2024-10-16 07:12:31.239347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.931 [2024-10-16 07:12:31.239378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.931 qpair failed and we were unable to recover it. 00:29:31.931 [2024-10-16 07:12:31.239723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.931 [2024-10-16 07:12:31.239753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.931 qpair failed and we were unable to recover it. 00:29:31.931 [2024-10-16 07:12:31.240132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.931 [2024-10-16 07:12:31.240160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.931 qpair failed and we were unable to recover it. 00:29:31.931 [2024-10-16 07:12:31.240518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.931 [2024-10-16 07:12:31.240548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.931 qpair failed and we were unable to recover it. 00:29:31.931 [2024-10-16 07:12:31.240911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.931 [2024-10-16 07:12:31.240942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.931 qpair failed and we were unable to recover it. 00:29:31.931 [2024-10-16 07:12:31.241317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.931 [2024-10-16 07:12:31.241346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.931 qpair failed and we were unable to recover it. 00:29:31.931 [2024-10-16 07:12:31.241566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.931 [2024-10-16 07:12:31.241595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.931 qpair failed and we were unable to recover it. 00:29:31.931 [2024-10-16 07:12:31.242010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.931 [2024-10-16 07:12:31.242040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.931 qpair failed and we were unable to recover it. 00:29:31.931 [2024-10-16 07:12:31.242407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.931 [2024-10-16 07:12:31.242436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.931 qpair failed and we were unable to recover it. 
00:29:31.931 [2024-10-16 07:12:31.242671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.931 [2024-10-16 07:12:31.242700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.931 qpair failed and we were unable to recover it. 00:29:31.931 [2024-10-16 07:12:31.243095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.931 [2024-10-16 07:12:31.243125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.931 qpair failed and we were unable to recover it. 00:29:31.931 [2024-10-16 07:12:31.243482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.931 [2024-10-16 07:12:31.243511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.931 qpair failed and we were unable to recover it. 00:29:31.931 [2024-10-16 07:12:31.243889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.931 [2024-10-16 07:12:31.243920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.931 qpair failed and we were unable to recover it. 00:29:31.931 [2024-10-16 07:12:31.244247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.931 [2024-10-16 07:12:31.244277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.931 qpair failed and we were unable to recover it. 00:29:31.931 [2024-10-16 07:12:31.244502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.931 [2024-10-16 07:12:31.244534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.931 qpair failed and we were unable to recover it. 00:29:31.931 [2024-10-16 07:12:31.244910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.931 [2024-10-16 07:12:31.244941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.931 qpair failed and we were unable to recover it. 00:29:31.931 [2024-10-16 07:12:31.245306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.931 [2024-10-16 07:12:31.245342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.931 qpair failed and we were unable to recover it. 00:29:31.931 [2024-10-16 07:12:31.245700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.931 [2024-10-16 07:12:31.245731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.931 qpair failed and we were unable to recover it. 00:29:31.931 [2024-10-16 07:12:31.246099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.931 [2024-10-16 07:12:31.246138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.932 qpair failed and we were unable to recover it. 
00:29:31.932 [2024-10-16 07:12:31.246506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.932 [2024-10-16 07:12:31.246535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.932 qpair failed and we were unable to recover it. 00:29:31.932 [2024-10-16 07:12:31.246898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.932 [2024-10-16 07:12:31.246930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.932 qpair failed and we were unable to recover it. 00:29:31.932 [2024-10-16 07:12:31.247279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.932 [2024-10-16 07:12:31.247309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.932 qpair failed and we were unable to recover it. 00:29:31.932 [2024-10-16 07:12:31.247674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.932 [2024-10-16 07:12:31.247704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.932 qpair failed and we were unable to recover it. 00:29:31.932 [2024-10-16 07:12:31.248071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.932 [2024-10-16 07:12:31.248102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.932 qpair failed and we were unable to recover it. 00:29:31.932 [2024-10-16 07:12:31.248467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.932 [2024-10-16 07:12:31.248495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.932 qpair failed and we were unable to recover it. 00:29:31.932 [2024-10-16 07:12:31.248864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.932 [2024-10-16 07:12:31.248895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.932 qpair failed and we were unable to recover it. 00:29:31.932 [2024-10-16 07:12:31.249316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.932 [2024-10-16 07:12:31.249345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.932 qpair failed and we were unable to recover it. 00:29:31.932 [2024-10-16 07:12:31.249712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.932 [2024-10-16 07:12:31.249741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.932 qpair failed and we were unable to recover it. 00:29:31.932 [2024-10-16 07:12:31.250142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.932 [2024-10-16 07:12:31.250174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.932 qpair failed and we were unable to recover it. 
00:29:31.932 [2024-10-16 07:12:31.250527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.932 [2024-10-16 07:12:31.250557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.932 qpair failed and we were unable to recover it. 00:29:31.932 [2024-10-16 07:12:31.250918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.932 [2024-10-16 07:12:31.250952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.932 qpair failed and we were unable to recover it. 00:29:31.932 [2024-10-16 07:12:31.251322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.932 [2024-10-16 07:12:31.251352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.932 qpair failed and we were unable to recover it. 00:29:31.932 [2024-10-16 07:12:31.251699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.932 [2024-10-16 07:12:31.251729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.932 qpair failed and we were unable to recover it. 00:29:31.932 [2024-10-16 07:12:31.252071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.932 [2024-10-16 07:12:31.252104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.932 qpair failed and we were unable to recover it. 00:29:31.932 [2024-10-16 07:12:31.252328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.932 [2024-10-16 07:12:31.252357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.932 qpair failed and we were unable to recover it. 00:29:31.932 [2024-10-16 07:12:31.252765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.932 [2024-10-16 07:12:31.252795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.932 qpair failed and we were unable to recover it. 00:29:31.932 [2024-10-16 07:12:31.253165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.932 [2024-10-16 07:12:31.253197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.932 qpair failed and we were unable to recover it. 00:29:31.932 [2024-10-16 07:12:31.253543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.932 [2024-10-16 07:12:31.253573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.932 qpair failed and we were unable to recover it. 00:29:31.932 [2024-10-16 07:12:31.253924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.932 [2024-10-16 07:12:31.253954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.932 qpair failed and we were unable to recover it. 
00:29:31.932 [2024-10-16 07:12:31.254325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.932 [2024-10-16 07:12:31.254357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.932 qpair failed and we were unable to recover it. 00:29:31.932 [2024-10-16 07:12:31.254724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.932 [2024-10-16 07:12:31.254754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.932 qpair failed and we were unable to recover it. 00:29:31.932 [2024-10-16 07:12:31.255155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.932 [2024-10-16 07:12:31.255185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.932 qpair failed and we were unable to recover it. 00:29:31.932 [2024-10-16 07:12:31.255511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.932 [2024-10-16 07:12:31.255541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.932 qpair failed and we were unable to recover it. 00:29:31.932 [2024-10-16 07:12:31.255902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.932 [2024-10-16 07:12:31.255934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.932 qpair failed and we were unable to recover it. 00:29:31.932 [2024-10-16 07:12:31.256310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.932 [2024-10-16 07:12:31.256339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.932 qpair failed and we were unable to recover it. 00:29:31.932 [2024-10-16 07:12:31.256572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.932 [2024-10-16 07:12:31.256604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.932 qpair failed and we were unable to recover it. 00:29:31.932 [2024-10-16 07:12:31.256986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.932 [2024-10-16 07:12:31.257018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.932 qpair failed and we were unable to recover it. 00:29:31.932 [2024-10-16 07:12:31.257382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.932 [2024-10-16 07:12:31.257412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.932 qpair failed and we were unable to recover it. 00:29:31.932 [2024-10-16 07:12:31.257768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.932 [2024-10-16 07:12:31.257797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.932 qpair failed and we were unable to recover it. 
00:29:31.932 [2024-10-16 07:12:31.258190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.932 [2024-10-16 07:12:31.258222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.932 qpair failed and we were unable to recover it. 00:29:31.932 [2024-10-16 07:12:31.258451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.932 [2024-10-16 07:12:31.258480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.932 qpair failed and we were unable to recover it. 00:29:31.932 [2024-10-16 07:12:31.258837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.932 [2024-10-16 07:12:31.258893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.932 qpair failed and we were unable to recover it. 00:29:31.932 [2024-10-16 07:12:31.259302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.932 [2024-10-16 07:12:31.259334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.932 qpair failed and we were unable to recover it. 00:29:31.932 [2024-10-16 07:12:31.259677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.932 [2024-10-16 07:12:31.259709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.932 qpair failed and we were unable to recover it. 00:29:31.932 [2024-10-16 07:12:31.259917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.932 [2024-10-16 07:12:31.259950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.932 qpair failed and we were unable to recover it. 00:29:31.932 [2024-10-16 07:12:31.260285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.932 [2024-10-16 07:12:31.260315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.932 qpair failed and we were unable to recover it. 00:29:31.932 [2024-10-16 07:12:31.260533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.932 [2024-10-16 07:12:31.260561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.932 qpair failed and we were unable to recover it. 00:29:31.932 [2024-10-16 07:12:31.260968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.932 [2024-10-16 07:12:31.261000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.932 qpair failed and we were unable to recover it. 00:29:31.932 [2024-10-16 07:12:31.261348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.932 [2024-10-16 07:12:31.261379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.932 qpair failed and we were unable to recover it. 
00:29:31.932 [2024-10-16 07:12:31.261721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.932 [2024-10-16 07:12:31.261754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.932 qpair failed and we were unable to recover it. 00:29:31.932 [2024-10-16 07:12:31.262095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.932 [2024-10-16 07:12:31.262126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.932 qpair failed and we were unable to recover it. 00:29:31.932 [2024-10-16 07:12:31.262464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.932 [2024-10-16 07:12:31.262496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.932 qpair failed and we were unable to recover it. 00:29:31.933 [2024-10-16 07:12:31.262828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.933 [2024-10-16 07:12:31.262868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.933 qpair failed and we were unable to recover it. 00:29:31.933 [2024-10-16 07:12:31.263213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.933 [2024-10-16 07:12:31.263243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.933 qpair failed and we were unable to recover it. 00:29:31.933 [2024-10-16 07:12:31.263609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.933 [2024-10-16 07:12:31.263639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.933 qpair failed and we were unable to recover it. 00:29:31.933 [2024-10-16 07:12:31.263992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.933 [2024-10-16 07:12:31.264023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.933 qpair failed and we were unable to recover it. 00:29:31.933 [2024-10-16 07:12:31.264232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.933 [2024-10-16 07:12:31.264261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.933 qpair failed and we were unable to recover it. 00:29:31.933 [2024-10-16 07:12:31.264641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.933 [2024-10-16 07:12:31.264670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.933 qpair failed and we were unable to recover it. 00:29:31.933 [2024-10-16 07:12:31.265023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.933 [2024-10-16 07:12:31.265054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.933 qpair failed and we were unable to recover it. 
00:29:31.933 [2024-10-16 07:12:31.265412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.933 [2024-10-16 07:12:31.265441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.933 qpair failed and we were unable to recover it. 00:29:31.933 [2024-10-16 07:12:31.265808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.933 [2024-10-16 07:12:31.265839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.933 qpair failed and we were unable to recover it. 00:29:31.933 [2024-10-16 07:12:31.266206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.933 [2024-10-16 07:12:31.266239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.933 qpair failed and we were unable to recover it. 00:29:31.933 [2024-10-16 07:12:31.266601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.933 [2024-10-16 07:12:31.266630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.933 qpair failed and we were unable to recover it. 00:29:31.933 [2024-10-16 07:12:31.266964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.933 [2024-10-16 07:12:31.266996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.933 qpair failed and we were unable to recover it. 00:29:31.933 [2024-10-16 07:12:31.267369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.933 [2024-10-16 07:12:31.267399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.933 qpair failed and we were unable to recover it. 00:29:31.933 [2024-10-16 07:12:31.267776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.933 [2024-10-16 07:12:31.267805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.933 qpair failed and we were unable to recover it. 00:29:31.933 [2024-10-16 07:12:31.268193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.933 [2024-10-16 07:12:31.268224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.933 qpair failed and we were unable to recover it. 00:29:31.933 [2024-10-16 07:12:31.268592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.933 [2024-10-16 07:12:31.268621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.933 qpair failed and we were unable to recover it. 00:29:31.933 [2024-10-16 07:12:31.268992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.933 [2024-10-16 07:12:31.269023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.933 qpair failed and we were unable to recover it. 
00:29:31.933 [2024-10-16 07:12:31.269377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.933 [2024-10-16 07:12:31.269405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.933 qpair failed and we were unable to recover it. 00:29:31.933 [2024-10-16 07:12:31.269644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.933 [2024-10-16 07:12:31.269673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.933 qpair failed and we were unable to recover it. 00:29:31.933 [2024-10-16 07:12:31.270014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.933 [2024-10-16 07:12:31.270046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.933 qpair failed and we were unable to recover it. 00:29:31.933 [2024-10-16 07:12:31.270428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.933 [2024-10-16 07:12:31.270457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.933 qpair failed and we were unable to recover it. 00:29:31.933 [2024-10-16 07:12:31.270828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.933 [2024-10-16 07:12:31.270884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.933 qpair failed and we were unable to recover it. 00:29:31.933 [2024-10-16 07:12:31.271225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.933 [2024-10-16 07:12:31.271254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.933 qpair failed and we were unable to recover it. 00:29:31.933 [2024-10-16 07:12:31.271496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.933 [2024-10-16 07:12:31.271528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.933 qpair failed and we were unable to recover it. 00:29:31.933 [2024-10-16 07:12:31.271913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.933 [2024-10-16 07:12:31.271945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.933 qpair failed and we were unable to recover it. 00:29:31.933 [2024-10-16 07:12:31.272189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.933 [2024-10-16 07:12:31.272221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.933 qpair failed and we were unable to recover it. 00:29:31.933 [2024-10-16 07:12:31.272481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.933 [2024-10-16 07:12:31.272511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.933 qpair failed and we were unable to recover it. 
00:29:31.933 [2024-10-16 07:12:31.272874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.933 [2024-10-16 07:12:31.272905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.933 qpair failed and we were unable to recover it. 00:29:31.933 [2024-10-16 07:12:31.273113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.933 [2024-10-16 07:12:31.273143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.933 qpair failed and we were unable to recover it. 00:29:31.933 [2024-10-16 07:12:31.273493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.933 [2024-10-16 07:12:31.273523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.933 qpair failed and we were unable to recover it. 00:29:31.933 [2024-10-16 07:12:31.273905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.933 [2024-10-16 07:12:31.273936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.933 qpair failed and we were unable to recover it. 00:29:31.933 [2024-10-16 07:12:31.274268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.933 [2024-10-16 07:12:31.274297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.933 qpair failed and we were unable to recover it. 00:29:31.933 [2024-10-16 07:12:31.274671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.933 [2024-10-16 07:12:31.274701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.933 qpair failed and we were unable to recover it. 00:29:31.933 [2024-10-16 07:12:31.274936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.933 [2024-10-16 07:12:31.274969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.933 qpair failed and we were unable to recover it. 00:29:31.933 [2024-10-16 07:12:31.275272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.933 [2024-10-16 07:12:31.275311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.933 qpair failed and we were unable to recover it. 00:29:31.933 [2024-10-16 07:12:31.275665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.933 [2024-10-16 07:12:31.275697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.933 qpair failed and we were unable to recover it. 00:29:31.933 [2024-10-16 07:12:31.275903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.933 [2024-10-16 07:12:31.275933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.933 qpair failed and we were unable to recover it. 
00:29:31.933 [2024-10-16 07:12:31.276303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.933 [2024-10-16 07:12:31.276333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420
00:29:31.933 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() errno = 111 in posix.c:1055, sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 in nvme_tcp.c:2399, then "qpair failed and we were unable to recover it.") repeats back to back for every attempt from 07:12:31.276303 through 07:12:31.327447 ...]
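For anyone triaging this log: errno = 111 is Linux ECONNREFUSED, so each connect() from the host-side initiator reached 10.0.0.2 but found nothing accepting on port 4420. That is the expected shape of this test, since nvmf_target_disconnect deliberately takes the target away while the host keeps retrying the qpair. A minimal bash sketch (illustration only, not part of the test suite; the address and port come from the log above, the retry count and sleep interval are arbitrary) that reproduces the same refusal from a shell:

    # bash's /dev/tcp redirection performs a plain connect(2), so a missing
    # listener fails the same way posix_sock_create reports above (ECONNREFUSED).
    addr=10.0.0.2 port=4420
    for i in 1 2 3 4 5; do
        if timeout 1 bash -c "exec 3<>/dev/tcp/$addr/$port" 2>/dev/null; then
            echo "attempt $i: listener is accepting again"
            break
        fi
        echo "attempt $i: connect to $addr:$port failed (listener down?)"
        sleep 0.2
    done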
[... the connect() failures continue in the same pattern from 07:12:31.327803 onward while the test harness resumes its xtrace output: ...]
00:29:31.937 07:12:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:29:31.937 07:12:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0
00:29:31.937 07:12:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:29:31.937 07:12:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:29:31.937 07:12:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... interleaved with those five harness lines, the connect()/qpair-failure triplets keep arriving (through 07:12:31.334183), all errno = 111 against 10.0.0.2:4420 ...]
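The "-- # ..." lines above are bash xtrace from the harness rather than SPDK errors: the (( i == 0 )) / return 0 pair is a wait loop in autotest_common.sh finishing successfully, timing_exit start_nvmf_tgt closes the timed target-startup phase, and xtrace_disable / set +x silences tracing again. As a rough sketch of that convention (hypothetical helper bodies, an assumption about their shape rather than SPDK's actual implementation):

    # Hypothetical shapes for the helpers named in the trace; the real
    # autotest_common.sh versions differ in detail.
    declare -A _phase_start
    timing_enter()   { _phase_start[$1]=$SECONDS; }
    timing_exit()    { echo "phase '$1' took $(( SECONDS - _phase_start[$1] ))s"; }
    xtrace_disable() { set +x; }   # produces the "set +x" step seen in the log

    timing_enter start_nvmf_tgt
    # ... launch nvmf_tgt here and poll until it answers (the (( i == 0 )) check) ...
    timing_exit start_nvmf_tgt
    xtrace_disable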
[... the identical connect()/qpair-failure triplets continue uninterrupted from 07:12:31.334515 through 07:12:31.352815, every attempt against 10.0.0.2:4420 refused with errno = 111 and ending "qpair failed and we were unable to recover it." ...]
00:29:31.938 [2024-10-16 07:12:31.353210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.938 [2024-10-16 07:12:31.353242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.938 qpair failed and we were unable to recover it. 00:29:31.938 [2024-10-16 07:12:31.353612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.938 [2024-10-16 07:12:31.353641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.938 qpair failed and we were unable to recover it. 00:29:31.938 [2024-10-16 07:12:31.353919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.938 [2024-10-16 07:12:31.353954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.938 qpair failed and we were unable to recover it. 00:29:31.938 [2024-10-16 07:12:31.354305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.938 [2024-10-16 07:12:31.354335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.938 qpair failed and we were unable to recover it. 00:29:31.938 [2024-10-16 07:12:31.354574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.938 [2024-10-16 07:12:31.354603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.938 qpair failed and we were unable to recover it. 00:29:31.938 [2024-10-16 07:12:31.354858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.938 [2024-10-16 07:12:31.354887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.938 qpair failed and we were unable to recover it. 00:29:31.938 [2024-10-16 07:12:31.355285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.938 [2024-10-16 07:12:31.355315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.938 qpair failed and we were unable to recover it. 00:29:31.938 [2024-10-16 07:12:31.355674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.938 [2024-10-16 07:12:31.355706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.938 qpair failed and we were unable to recover it. 00:29:31.938 [2024-10-16 07:12:31.355933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.938 [2024-10-16 07:12:31.355964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.938 qpair failed and we were unable to recover it. 00:29:31.938 [2024-10-16 07:12:31.356178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.938 [2024-10-16 07:12:31.356209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.938 qpair failed and we were unable to recover it. 
00:29:31.938 [2024-10-16 07:12:31.356566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.938 [2024-10-16 07:12:31.356597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.938 qpair failed and we were unable to recover it. 00:29:31.938 [2024-10-16 07:12:31.356980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.938 [2024-10-16 07:12:31.357012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.938 qpair failed and we were unable to recover it. 00:29:31.938 [2024-10-16 07:12:31.357242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.938 [2024-10-16 07:12:31.357273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.938 qpair failed and we were unable to recover it. 00:29:31.938 [2024-10-16 07:12:31.357660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.938 [2024-10-16 07:12:31.357691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.938 qpair failed and we were unable to recover it. 00:29:31.938 [2024-10-16 07:12:31.358058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.938 [2024-10-16 07:12:31.358097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.938 qpair failed and we were unable to recover it. 00:29:31.938 [2024-10-16 07:12:31.358450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.938 [2024-10-16 07:12:31.358483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.938 qpair failed and we were unable to recover it. 00:29:31.938 [2024-10-16 07:12:31.358853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.938 [2024-10-16 07:12:31.358885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.938 qpair failed and we were unable to recover it. 00:29:31.938 [2024-10-16 07:12:31.359272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.938 [2024-10-16 07:12:31.359304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.938 qpair failed and we were unable to recover it. 00:29:31.938 [2024-10-16 07:12:31.359664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.938 [2024-10-16 07:12:31.359695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.938 qpair failed and we were unable to recover it. 00:29:31.938 [2024-10-16 07:12:31.360029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.938 [2024-10-16 07:12:31.360062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.938 qpair failed and we were unable to recover it. 
00:29:31.938 [2024-10-16 07:12:31.360425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.938 [2024-10-16 07:12:31.360455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.938 qpair failed and we were unable to recover it. 00:29:31.938 [2024-10-16 07:12:31.360807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.938 [2024-10-16 07:12:31.360837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.938 qpair failed and we were unable to recover it. 00:29:31.938 [2024-10-16 07:12:31.361063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.938 [2024-10-16 07:12:31.361093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.938 qpair failed and we were unable to recover it. 00:29:31.938 [2024-10-16 07:12:31.361478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.938 [2024-10-16 07:12:31.361509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.938 qpair failed and we were unable to recover it. 00:29:31.938 [2024-10-16 07:12:31.361856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.938 [2024-10-16 07:12:31.361888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.938 qpair failed and we were unable to recover it. 00:29:31.938 [2024-10-16 07:12:31.362241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.938 [2024-10-16 07:12:31.362270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.938 qpair failed and we were unable to recover it. 00:29:31.938 [2024-10-16 07:12:31.362641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.938 [2024-10-16 07:12:31.362671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.938 qpair failed and we were unable to recover it. 00:29:31.938 [2024-10-16 07:12:31.362910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.939 [2024-10-16 07:12:31.362942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.939 qpair failed and we were unable to recover it. 00:29:31.939 [2024-10-16 07:12:31.363344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.939 [2024-10-16 07:12:31.363374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.939 qpair failed and we were unable to recover it. 00:29:31.939 [2024-10-16 07:12:31.363725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.939 [2024-10-16 07:12:31.363757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.939 qpair failed and we were unable to recover it. 
00:29:31.939 [2024-10-16 07:12:31.364107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.939 [2024-10-16 07:12:31.364140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.939 qpair failed and we were unable to recover it. 00:29:31.939 [2024-10-16 07:12:31.364511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.939 [2024-10-16 07:12:31.364543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.939 qpair failed and we were unable to recover it. 00:29:31.939 [2024-10-16 07:12:31.364923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.939 [2024-10-16 07:12:31.364955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.939 qpair failed and we were unable to recover it. 00:29:31.939 [2024-10-16 07:12:31.365291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.939 [2024-10-16 07:12:31.365323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.939 qpair failed and we were unable to recover it. 00:29:31.939 [2024-10-16 07:12:31.365679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.939 [2024-10-16 07:12:31.365713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.939 qpair failed and we were unable to recover it. 00:29:31.939 [2024-10-16 07:12:31.366006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.939 [2024-10-16 07:12:31.366036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.939 qpair failed and we were unable to recover it. 00:29:31.939 [2024-10-16 07:12:31.366275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.939 [2024-10-16 07:12:31.366304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.939 qpair failed and we were unable to recover it. 00:29:31.939 [2024-10-16 07:12:31.366694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.939 [2024-10-16 07:12:31.366725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.939 qpair failed and we were unable to recover it. 00:29:31.939 [2024-10-16 07:12:31.367000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.939 [2024-10-16 07:12:31.367033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.939 qpair failed and we were unable to recover it. 00:29:31.939 [2024-10-16 07:12:31.367410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.939 [2024-10-16 07:12:31.367442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.939 qpair failed and we were unable to recover it. 
00:29:31.939 [2024-10-16 07:12:31.367790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.939 [2024-10-16 07:12:31.367824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.939 qpair failed and we were unable to recover it. 00:29:31.939 [2024-10-16 07:12:31.368237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.939 [2024-10-16 07:12:31.368269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.939 qpair failed and we were unable to recover it. 00:29:31.939 [2024-10-16 07:12:31.368624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.939 [2024-10-16 07:12:31.368654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.939 qpair failed and we were unable to recover it. 00:29:31.939 [2024-10-16 07:12:31.368895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.939 [2024-10-16 07:12:31.368926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.939 qpair failed and we were unable to recover it. 00:29:31.939 [2024-10-16 07:12:31.369165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.939 [2024-10-16 07:12:31.369194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.939 qpair failed and we were unable to recover it. 00:29:31.939 [2024-10-16 07:12:31.369543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.939 [2024-10-16 07:12:31.369573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.939 qpair failed and we were unable to recover it. 00:29:31.939 [2024-10-16 07:12:31.369940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.939 [2024-10-16 07:12:31.369972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.939 qpair failed and we were unable to recover it. 00:29:31.939 [2024-10-16 07:12:31.370352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.939 [2024-10-16 07:12:31.370383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.939 qpair failed and we were unable to recover it. 00:29:31.939 [2024-10-16 07:12:31.370735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.939 [2024-10-16 07:12:31.370765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.939 qpair failed and we were unable to recover it. 00:29:31.939 [2024-10-16 07:12:31.371149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.939 [2024-10-16 07:12:31.371183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f64dc000b90 with addr=10.0.0.2, port=4420 00:29:31.939 qpair failed and we were unable to recover it. 
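errno 111 here is ECONNREFUSED on Linux: the host side keeps retrying connect() to 10.0.0.2:4420 while nothing is listening (the disconnect test has taken the target down), so the kernel answers each attempt with a refusal and the NVMe/TCP qpair cannot be re-established. A minimal standalone sketch of the same condition, assuming a Linux box and a port with no listener (not SPDK code; 127.0.0.1 is a stand-in for the target address):

    import errno, socket

    assert errno.ECONNREFUSED == 111   # Linux value, matching the log

    try:
        # any host:port pair with no listener behaves like the downed target
        socket.create_connection(("127.0.0.1", 4420), timeout=1)
    except ConnectionRefusedError as e:
        print("connect() failed, errno =", e.errno)   # -> 111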
00:29:31.939 [... connect() errno = 111 / qpair failed error blocks continue around the script trace below ...]
00:29:31.939 07:12:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
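The trap registers cleanup so that the shared-memory dump and the nvmftestfini teardown run whether the test exits normally or is killed. The same arrangement in Python terms, purely as an illustration (the handler body is a placeholder, not the harness's real cleanup):

    import atexit, signal, sys

    def cleanup():
        # stands in for: process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini
        print("cleaning up target state")

    atexit.register(cleanup)                          # covers EXIT
    for sig in (signal.SIGINT, signal.SIGTERM):       # covers SIGINT, SIGTERM
        signal.signal(sig, lambda s, f: sys.exit(1))  # exit -> atexit runs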
00:29:31.939 07:12:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:29:31.939 07:12:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:31.939 07:12:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:31.939 [... connect() errno = 111 / qpair failed error blocks continue ...]
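rpc_cmd bdev_malloc_create 64 512 -b Malloc0 asks the running SPDK target for a 64 MiB RAM-backed bdev with 512-byte blocks, named Malloc0. Under the hood this is a JSON-RPC call over the app's Unix socket; a bare-bones client sketch, assuming the default socket path /var/tmp/spdk.sock (normally you would just use scripts/rpc.py):

    import json, socket

    def spdk_rpc(method, params, path="/var/tmp/spdk.sock"):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.connect(path)
        s.sendall(json.dumps({"jsonrpc": "2.0", "id": 1,
                              "method": method, "params": params}).encode())
        buf = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                break
            buf += chunk
            try:
                return json.loads(buf)   # full response object received
            except ValueError:
                pass                     # partial JSON, keep reading

    # CLI args "64 512" mean 64 MiB total at 512 B blocks = 131072 blocks
    print(spdk_rpc("bdev_malloc_create",
                   {"num_blocks": 64 * 1024 * 1024 // 512,
                    "block_size": 512,
                    "name": "Malloc0"}))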
00:29:31.939 [... connect() errno = 111 / qpair failed error blocks continue uninterrupted while the RPC completes ...]
00:29:32.207 [... 4 connect() failed / sock connection error triplets, 07:12:31.407448 through 07:12:31.408642 ...]
00:29:32.207 Malloc0
00:29:32.207 [... 3 connect() failed triplets, 07:12:31.409020 through 07:12:31.409830 ...]
00:29:32.207 07:12:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:32.207 07:12:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:29:32.207 [... 1 connect() failed triplet, 07:12:31.410186 ...]
00:29:32.207 07:12:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:32.207 07:12:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:32.207 [... 9 connect() failed / sock connection error triplets, 07:12:31.410557 through 07:12:31.413543 ...]
00:29:32.207 [... 7 connect() failed / sock connection error triplets, 07:12:31.413940 through 07:12:31.416100 ...]
00:29:32.208 [2024-10-16 07:12:31.416333] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:32.208 [... 2 connect() failed triplets, 07:12:31.416455 and 07:12:31.416878 ...]
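rpc_cmd at host/target_disconnect.sh@21 is the harness wrapper around SPDK's scripts/rpc.py, and the *** TCP Transport Init *** notice above is the target acknowledging the call. Outside the harness the step would look roughly like this (a sketch; the default RPC socket path is an assumption, and the harness passes an extra -o option not reproduced here):

  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp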
00:29:32.208 [... 20 more connect() failed / sock connection error / qpair-failed triplets, 07:12:31.417154 through 07:12:31.424520; content identical apart from timestamps ...]
00:29:32.208 [... 3 connect() failed / sock connection error triplets, 07:12:31.424866 through 07:12:31.425620 ...]
00:29:32.208 07:12:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:32.208 07:12:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:29:32.208 07:12:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:32.208 07:12:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:32.208 [... 5 connect() failed triplets, 07:12:31.426000 through 07:12:31.427774 ...]
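The @22 step creates the NVM subsystem the host will connect to. A minimal standalone sketch of the same call (flag meanings per scripts/rpc.py: -a allows any host NQN to connect, -s sets the serial number):

  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001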
00:29:32.208 [... 20 more connect() failed / sock connection error / qpair-failed triplets, 07:12:31.428139 through 07:12:31.435034; timestamps are the only difference ...]
00:29:32.209 [... 7 connect() failed / sock connection error triplets, 07:12:31.435421 through 07:12:31.437779 ...]
00:29:32.209 07:12:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:32.209 07:12:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:32.209 07:12:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:32.209 [... 1 connect() failed triplet, 07:12:31.438149 ...]
00:29:32.209 07:12:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:32.209 [... 9 connect() failed / sock connection error triplets, 07:12:31.438359 through 07:12:31.441104 ...]
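The @24 step exposes the Malloc0 bdev (its name was echoed earlier when the harness created it) as a namespace of the subsystem. A sketch of both halves; the 64 MiB size and 512-byte block size for the malloc bdev are assumptions, not values taken from this log:

  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # prints the new bdev's name, i.e. "Malloc0"
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0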
00:29:32.210 [... 20 more connect() failed / sock connection error / qpair-failed triplets, 07:12:31.441465 through 07:12:31.448573 ...]
00:29:32.210 [... 3 connect() failed / sock connection error triplets, 07:12:31.448924 through 07:12:31.449592 ...]
00:29:32.210 07:12:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:32.210 07:12:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:32.210 07:12:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:32.210 07:12:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:32.210 [... 5 connect() failed triplets, 07:12:31.449973 through 07:12:31.451320 ...]
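The @25 step is what finally opens 10.0.0.2:4420; once the Listening notice appears below, connect() stops failing with ECONNREFUSED. A sketch of the same step, with an illustrative kernel-initiator connect added for comparison (the test itself drives the SPDK host stack, not nvme-cli):

  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420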
00:29:32.210 [... 10 connect() failed / sock connection error triplets, 07:12:31.451713 through 07:12:31.455307 ...]
00:29:32.210 [... 3 connect() failed / sock connection error triplets, 07:12:31.455704 through 07:12:31.456553 ...]
00:29:32.211 [2024-10-16 07:12:31.456749] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:32.211 07:12:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:32.211 07:12:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:29:32.211 07:12:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:32.211 07:12:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:32.211 [2024-10-16 07:12:31.467727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.211 [2024-10-16 07:12:31.467874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.211 [2024-10-16 07:12:31.467916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.211 [2024-10-16 07:12:31.467934] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.211 [2024-10-16 07:12:31.467949] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:32.211 [2024-10-16 07:12:31.467991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:32.211 qpair failed and we were unable to recover it.
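Note the shift in failure mode here: the TCP connection now succeeds, but the Fabrics CONNECT for I/O qpair 1 is rejected because the target has no controller with ID 0x1 (the admin qpair this host believes it owns does not exist on the target side). The reported status decodes as sct 1 (command-specific status type) with sc 130 = 0x82, which for the Fabrics CONNECT command corresponds to "Connect Invalid Parameters" in the NVMe-oF spec, consistent with the target's "Unknown controller ID 0x1" complaint. Hex makes the status code easier to read:

  printf 'sc 0x%x\n' 130   # -> sc 0x82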
00:29:32.211 07:12:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:32.211 07:12:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3313001
00:29:32.211 [... the same six-line CONNECT-failure sequence (ctrlr.c "Unknown controller ID 0x1" through nvme_qpair.c "CQ transport error -6 (No such device or address) on qpair id 1") repeats 15 more times, 07:12:31.477531 through 07:12:31.617911, at roughly 10 ms intervals; each attempt ends with "qpair failed and we were unable to recover it." ...]
00:29:32.212 [2024-10-16 07:12:31.627867] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.212 [2024-10-16 07:12:31.627947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.212 [2024-10-16 07:12:31.627965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.212 [2024-10-16 07:12:31.627973] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.212 [2024-10-16 07:12:31.627979] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:32.212 [2024-10-16 07:12:31.627996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.212 qpair failed and we were unable to recover it. 00:29:32.212 [2024-10-16 07:12:31.637822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.212 [2024-10-16 07:12:31.637894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.212 [2024-10-16 07:12:31.637913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.212 [2024-10-16 07:12:31.637920] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.212 [2024-10-16 07:12:31.637927] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:32.212 [2024-10-16 07:12:31.637943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.212 qpair failed and we were unable to recover it. 00:29:32.212 [2024-10-16 07:12:31.647820] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.212 [2024-10-16 07:12:31.647892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.212 [2024-10-16 07:12:31.647910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.212 [2024-10-16 07:12:31.647918] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.212 [2024-10-16 07:12:31.647924] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:32.212 [2024-10-16 07:12:31.647945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.212 qpair failed and we were unable to recover it. 
00:29:32.212 [2024-10-16 07:12:31.658003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.212 [2024-10-16 07:12:31.658083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.212 [2024-10-16 07:12:31.658101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.212 [2024-10-16 07:12:31.658108] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.212 [2024-10-16 07:12:31.658115] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:32.212 [2024-10-16 07:12:31.658132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.212 qpair failed and we were unable to recover it. 00:29:32.212 [2024-10-16 07:12:31.668017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.212 [2024-10-16 07:12:31.668085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.212 [2024-10-16 07:12:31.668104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.212 [2024-10-16 07:12:31.668112] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.212 [2024-10-16 07:12:31.668118] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:32.212 [2024-10-16 07:12:31.668136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.212 qpair failed and we were unable to recover it. 00:29:32.212 [2024-10-16 07:12:31.678016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.212 [2024-10-16 07:12:31.678086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.212 [2024-10-16 07:12:31.678105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.212 [2024-10-16 07:12:31.678112] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.212 [2024-10-16 07:12:31.678120] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:32.212 [2024-10-16 07:12:31.678136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.212 qpair failed and we were unable to recover it. 
00:29:32.212 [2024-10-16 07:12:31.687932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.212 [2024-10-16 07:12:31.688004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.212 [2024-10-16 07:12:31.688023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.212 [2024-10-16 07:12:31.688031] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.212 [2024-10-16 07:12:31.688038] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:32.212 [2024-10-16 07:12:31.688054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.212 qpair failed and we were unable to recover it. 00:29:32.212 [2024-10-16 07:12:31.698025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.212 [2024-10-16 07:12:31.698104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.212 [2024-10-16 07:12:31.698123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.212 [2024-10-16 07:12:31.698132] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.212 [2024-10-16 07:12:31.698139] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:32.212 [2024-10-16 07:12:31.698156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.212 qpair failed and we were unable to recover it. 00:29:32.476 [2024-10-16 07:12:31.708138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.476 [2024-10-16 07:12:31.708229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.476 [2024-10-16 07:12:31.708247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.476 [2024-10-16 07:12:31.708255] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.476 [2024-10-16 07:12:31.708262] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:32.476 [2024-10-16 07:12:31.708278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.476 qpair failed and we were unable to recover it. 
00:29:32.476 [2024-10-16 07:12:31.718095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.476 [2024-10-16 07:12:31.718154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.477 [2024-10-16 07:12:31.718172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.477 [2024-10-16 07:12:31.718180] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.477 [2024-10-16 07:12:31.718187] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:32.477 [2024-10-16 07:12:31.718203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.477 qpair failed and we were unable to recover it. 00:29:32.477 [2024-10-16 07:12:31.728120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.477 [2024-10-16 07:12:31.728187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.477 [2024-10-16 07:12:31.728212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.477 [2024-10-16 07:12:31.728221] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.477 [2024-10-16 07:12:31.728227] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:32.477 [2024-10-16 07:12:31.728245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.477 qpair failed and we were unable to recover it. 00:29:32.477 [2024-10-16 07:12:31.738164] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.477 [2024-10-16 07:12:31.738235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.477 [2024-10-16 07:12:31.738256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.477 [2024-10-16 07:12:31.738264] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.477 [2024-10-16 07:12:31.738277] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:32.477 [2024-10-16 07:12:31.738295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.477 qpair failed and we were unable to recover it. 
00:29:32.477 [2024-10-16 07:12:31.748223] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.477 [2024-10-16 07:12:31.748300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.477 [2024-10-16 07:12:31.748319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.477 [2024-10-16 07:12:31.748327] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.477 [2024-10-16 07:12:31.748334] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:32.477 [2024-10-16 07:12:31.748350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.477 qpair failed and we were unable to recover it. 00:29:32.477 [2024-10-16 07:12:31.758194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.477 [2024-10-16 07:12:31.758259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.477 [2024-10-16 07:12:31.758277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.477 [2024-10-16 07:12:31.758285] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.477 [2024-10-16 07:12:31.758291] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:32.477 [2024-10-16 07:12:31.758307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.477 qpair failed and we were unable to recover it. 00:29:32.477 [2024-10-16 07:12:31.768201] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.477 [2024-10-16 07:12:31.768281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.477 [2024-10-16 07:12:31.768299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.477 [2024-10-16 07:12:31.768306] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.477 [2024-10-16 07:12:31.768313] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:32.477 [2024-10-16 07:12:31.768330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.477 qpair failed and we were unable to recover it. 
00:29:32.477 [2024-10-16 07:12:31.778258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.477 [2024-10-16 07:12:31.778328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.477 [2024-10-16 07:12:31.778347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.477 [2024-10-16 07:12:31.778354] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.477 [2024-10-16 07:12:31.778361] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:32.477 [2024-10-16 07:12:31.778378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.477 qpair failed and we were unable to recover it. 00:29:32.477 [2024-10-16 07:12:31.788276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.477 [2024-10-16 07:12:31.788352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.477 [2024-10-16 07:12:31.788371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.477 [2024-10-16 07:12:31.788379] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.477 [2024-10-16 07:12:31.788385] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:32.477 [2024-10-16 07:12:31.788402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.477 qpair failed and we were unable to recover it. 00:29:32.477 [2024-10-16 07:12:31.798280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.477 [2024-10-16 07:12:31.798347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.477 [2024-10-16 07:12:31.798366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.477 [2024-10-16 07:12:31.798373] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.477 [2024-10-16 07:12:31.798380] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:32.477 [2024-10-16 07:12:31.798396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.477 qpair failed and we were unable to recover it. 
00:29:32.477 [2024-10-16 07:12:31.808218] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.477 [2024-10-16 07:12:31.808291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.477 [2024-10-16 07:12:31.808310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.477 [2024-10-16 07:12:31.808317] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.477 [2024-10-16 07:12:31.808324] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:32.477 [2024-10-16 07:12:31.808340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.477 qpair failed and we were unable to recover it. 00:29:32.477 [2024-10-16 07:12:31.818377] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.477 [2024-10-16 07:12:31.818448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.477 [2024-10-16 07:12:31.818466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.477 [2024-10-16 07:12:31.818474] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.477 [2024-10-16 07:12:31.818480] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:32.477 [2024-10-16 07:12:31.818497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.477 qpair failed and we were unable to recover it. 00:29:32.477 [2024-10-16 07:12:31.828440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.477 [2024-10-16 07:12:31.828517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.477 [2024-10-16 07:12:31.828536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.477 [2024-10-16 07:12:31.828549] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.477 [2024-10-16 07:12:31.828555] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:32.477 [2024-10-16 07:12:31.828572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.477 qpair failed and we were unable to recover it. 
00:29:32.477 [2024-10-16 07:12:31.838450] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.477 [2024-10-16 07:12:31.838518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.477 [2024-10-16 07:12:31.838538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.477 [2024-10-16 07:12:31.838545] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.478 [2024-10-16 07:12:31.838552] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:32.478 [2024-10-16 07:12:31.838569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.478 qpair failed and we were unable to recover it. 00:29:32.478 [2024-10-16 07:12:31.848480] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.478 [2024-10-16 07:12:31.848556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.478 [2024-10-16 07:12:31.848575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.478 [2024-10-16 07:12:31.848583] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.478 [2024-10-16 07:12:31.848589] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:32.478 [2024-10-16 07:12:31.848606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.478 qpair failed and we were unable to recover it. 00:29:32.478 [2024-10-16 07:12:31.858507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.478 [2024-10-16 07:12:31.858576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.478 [2024-10-16 07:12:31.858594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.478 [2024-10-16 07:12:31.858602] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.478 [2024-10-16 07:12:31.858608] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:32.478 [2024-10-16 07:12:31.858625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.478 qpair failed and we were unable to recover it. 
00:29:32.478 [2024-10-16 07:12:31.868414] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.478 [2024-10-16 07:12:31.868485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.478 [2024-10-16 07:12:31.868503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.478 [2024-10-16 07:12:31.868512] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.478 [2024-10-16 07:12:31.868518] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:32.478 [2024-10-16 07:12:31.868534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.478 qpair failed and we were unable to recover it. 00:29:32.478 [2024-10-16 07:12:31.878508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.478 [2024-10-16 07:12:31.878577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.478 [2024-10-16 07:12:31.878595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.478 [2024-10-16 07:12:31.878603] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.478 [2024-10-16 07:12:31.878609] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:32.478 [2024-10-16 07:12:31.878626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.478 qpair failed and we were unable to recover it. 00:29:32.478 [2024-10-16 07:12:31.888631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.478 [2024-10-16 07:12:31.888705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.478 [2024-10-16 07:12:31.888743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.478 [2024-10-16 07:12:31.888753] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.478 [2024-10-16 07:12:31.888761] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:32.478 [2024-10-16 07:12:31.888786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.478 qpair failed and we were unable to recover it. 
00:29:32.478 [2024-10-16 07:12:31.898600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.478 [2024-10-16 07:12:31.898683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.478 [2024-10-16 07:12:31.898705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.478 [2024-10-16 07:12:31.898713] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.478 [2024-10-16 07:12:31.898720] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:32.478 [2024-10-16 07:12:31.898739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.478 qpair failed and we were unable to recover it. 00:29:32.478 [2024-10-16 07:12:31.908652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.478 [2024-10-16 07:12:31.908728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.478 [2024-10-16 07:12:31.908749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.478 [2024-10-16 07:12:31.908757] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.478 [2024-10-16 07:12:31.908763] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:32.478 [2024-10-16 07:12:31.908782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.478 qpair failed and we were unable to recover it. 00:29:32.478 [2024-10-16 07:12:31.918647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.478 [2024-10-16 07:12:31.918723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.478 [2024-10-16 07:12:31.918743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.478 [2024-10-16 07:12:31.918759] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.478 [2024-10-16 07:12:31.918766] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:32.478 [2024-10-16 07:12:31.918784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.478 qpair failed and we were unable to recover it. 
00:29:32.478 [2024-10-16 07:12:31.928690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.478 [2024-10-16 07:12:31.928751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.478 [2024-10-16 07:12:31.928770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.478 [2024-10-16 07:12:31.928778] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.478 [2024-10-16 07:12:31.928784] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:32.478 [2024-10-16 07:12:31.928801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.478 qpair failed and we were unable to recover it. 00:29:32.478 [2024-10-16 07:12:31.938699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.478 [2024-10-16 07:12:31.938772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.478 [2024-10-16 07:12:31.938791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.478 [2024-10-16 07:12:31.938799] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.478 [2024-10-16 07:12:31.938806] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:32.478 [2024-10-16 07:12:31.938823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.478 qpair failed and we were unable to recover it. 00:29:32.478 [2024-10-16 07:12:31.948720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.478 [2024-10-16 07:12:31.948785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.478 [2024-10-16 07:12:31.948804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.478 [2024-10-16 07:12:31.948812] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.478 [2024-10-16 07:12:31.948818] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:32.478 [2024-10-16 07:12:31.948835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.478 qpair failed and we were unable to recover it. 
00:29:32.478 [2024-10-16 07:12:31.958757] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.478 [2024-10-16 07:12:31.958867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.478 [2024-10-16 07:12:31.958886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.478 [2024-10-16 07:12:31.958894] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.478 [2024-10-16 07:12:31.958900] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:32.478 [2024-10-16 07:12:31.958917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.478 qpair failed and we were unable to recover it. 00:29:32.478 [2024-10-16 07:12:31.968786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.478 [2024-10-16 07:12:31.968859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.478 [2024-10-16 07:12:31.968878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.478 [2024-10-16 07:12:31.968886] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.478 [2024-10-16 07:12:31.968893] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:32.478 [2024-10-16 07:12:31.968909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.478 qpair failed and we were unable to recover it. 00:29:32.742 [2024-10-16 07:12:31.978804] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.742 [2024-10-16 07:12:31.978883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.742 [2024-10-16 07:12:31.978902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.742 [2024-10-16 07:12:31.978910] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.742 [2024-10-16 07:12:31.978916] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:32.742 [2024-10-16 07:12:31.978933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.742 qpair failed and we were unable to recover it. 
00:29:32.742 [2024-10-16 07:12:31.988765] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.742 [2024-10-16 07:12:31.988839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.742 [2024-10-16 07:12:31.988862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.742 [2024-10-16 07:12:31.988870] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.742 [2024-10-16 07:12:31.988877] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:32.742 [2024-10-16 07:12:31.988894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.742 qpair failed and we were unable to recover it. 00:29:32.742 [2024-10-16 07:12:31.998909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.742 [2024-10-16 07:12:31.998981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.742 [2024-10-16 07:12:31.999000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.742 [2024-10-16 07:12:31.999008] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.742 [2024-10-16 07:12:31.999015] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:32.742 [2024-10-16 07:12:31.999032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.742 qpair failed and we were unable to recover it. 00:29:32.742 [2024-10-16 07:12:32.008926] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.742 [2024-10-16 07:12:32.008991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.742 [2024-10-16 07:12:32.009016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.742 [2024-10-16 07:12:32.009023] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.742 [2024-10-16 07:12:32.009030] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:32.742 [2024-10-16 07:12:32.009046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.742 qpair failed and we were unable to recover it. 
00:29:32.742 [2024-10-16 07:12:32.018964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.742 [2024-10-16 07:12:32.019038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.742 [2024-10-16 07:12:32.019056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.742 [2024-10-16 07:12:32.019064] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.742 [2024-10-16 07:12:32.019070] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:32.742 [2024-10-16 07:12:32.019086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.742 qpair failed and we were unable to recover it. 00:29:32.742 [2024-10-16 07:12:32.029030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.742 [2024-10-16 07:12:32.029095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.742 [2024-10-16 07:12:32.029113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.742 [2024-10-16 07:12:32.029121] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.742 [2024-10-16 07:12:32.029127] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:32.742 [2024-10-16 07:12:32.029143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.742 qpair failed and we were unable to recover it. 00:29:32.742 [2024-10-16 07:12:32.039025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.742 [2024-10-16 07:12:32.039094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.742 [2024-10-16 07:12:32.039111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.742 [2024-10-16 07:12:32.039119] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.742 [2024-10-16 07:12:32.039125] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:32.742 [2024-10-16 07:12:32.039141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.742 qpair failed and we were unable to recover it. 
00:29:32.742 [2024-10-16 07:12:32.049022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.742 [2024-10-16 07:12:32.049082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.742 [2024-10-16 07:12:32.049102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.742 [2024-10-16 07:12:32.049109] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.742 [2024-10-16 07:12:32.049116] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:32.742 [2024-10-16 07:12:32.049138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.742 qpair failed and we were unable to recover it. 00:29:32.742 [2024-10-16 07:12:32.058976] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.742 [2024-10-16 07:12:32.059074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.742 [2024-10-16 07:12:32.059093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.742 [2024-10-16 07:12:32.059102] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.742 [2024-10-16 07:12:32.059108] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:32.742 [2024-10-16 07:12:32.059125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.742 qpair failed and we were unable to recover it. 00:29:32.742 [2024-10-16 07:12:32.069165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.742 [2024-10-16 07:12:32.069243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.742 [2024-10-16 07:12:32.069261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.742 [2024-10-16 07:12:32.069269] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.742 [2024-10-16 07:12:32.069275] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:32.742 [2024-10-16 07:12:32.069292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.742 qpair failed and we were unable to recover it. 
00:29:32.742 [2024-10-16 07:12:32.079144] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.742 [2024-10-16 07:12:32.079208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.742 [2024-10-16 07:12:32.079227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.742 [2024-10-16 07:12:32.079234] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.742 [2024-10-16 07:12:32.079241] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:32.743 [2024-10-16 07:12:32.079257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.743 qpair failed and we were unable to recover it. 00:29:32.743 [2024-10-16 07:12:32.089171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.743 [2024-10-16 07:12:32.089241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.743 [2024-10-16 07:12:32.089261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.743 [2024-10-16 07:12:32.089270] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.743 [2024-10-16 07:12:32.089277] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:32.743 [2024-10-16 07:12:32.089293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.743 qpair failed and we were unable to recover it. 00:29:32.743 [2024-10-16 07:12:32.099287] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.743 [2024-10-16 07:12:32.099383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.743 [2024-10-16 07:12:32.099412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.743 [2024-10-16 07:12:32.099419] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.743 [2024-10-16 07:12:32.099426] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:32.743 [2024-10-16 07:12:32.099442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.743 qpair failed and we were unable to recover it. 
[... the identical seven-record CONNECT failure sequence ("Unknown controller ID 0x1" ... "qpair failed and we were unable to recover it.") repeats for every subsequent connect attempt, from [2024-10-16 07:12:32.109297] through [2024-10-16 07:12:32.761322], at roughly 10 ms intervals, each time against tqpair=0x7f64dc000b90 with the same sct 1, sc 130 completion status ...]
00:29:33.537 [2024-10-16 07:12:32.771231] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.537 [2024-10-16 07:12:32.771310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.537 [2024-10-16 07:12:32.771328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.537 [2024-10-16 07:12:32.771335] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.537 [2024-10-16 07:12:32.771342] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:33.537 [2024-10-16 07:12:32.771358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:33.537 qpair failed and we were unable to recover it. 00:29:33.537 [2024-10-16 07:12:32.781233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.537 [2024-10-16 07:12:32.781301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.537 [2024-10-16 07:12:32.781318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.537 [2024-10-16 07:12:32.781326] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.537 [2024-10-16 07:12:32.781334] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:33.537 [2024-10-16 07:12:32.781350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:33.537 qpair failed and we were unable to recover it. 00:29:33.537 [2024-10-16 07:12:32.791272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.537 [2024-10-16 07:12:32.791342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.537 [2024-10-16 07:12:32.791359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.537 [2024-10-16 07:12:32.791368] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.537 [2024-10-16 07:12:32.791375] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:33.537 [2024-10-16 07:12:32.791392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:33.537 qpair failed and we were unable to recover it. 
00:29:33.537 [2024-10-16 07:12:32.801192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.537 [2024-10-16 07:12:32.801270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.537 [2024-10-16 07:12:32.801287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.537 [2024-10-16 07:12:32.801295] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.537 [2024-10-16 07:12:32.801302] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:33.537 [2024-10-16 07:12:32.801318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:33.537 qpair failed and we were unable to recover it. 00:29:33.537 [2024-10-16 07:12:32.811234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.537 [2024-10-16 07:12:32.811359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.537 [2024-10-16 07:12:32.811376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.537 [2024-10-16 07:12:32.811385] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.537 [2024-10-16 07:12:32.811396] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:33.537 [2024-10-16 07:12:32.811413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:33.537 qpair failed and we were unable to recover it. 00:29:33.537 [2024-10-16 07:12:32.821426] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.537 [2024-10-16 07:12:32.821497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.537 [2024-10-16 07:12:32.821515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.537 [2024-10-16 07:12:32.821523] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.537 [2024-10-16 07:12:32.821529] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:33.537 [2024-10-16 07:12:32.821546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:33.537 qpair failed and we were unable to recover it. 
00:29:33.537 [2024-10-16 07:12:32.831459] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.537 [2024-10-16 07:12:32.831539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.537 [2024-10-16 07:12:32.831557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.537 [2024-10-16 07:12:32.831568] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.537 [2024-10-16 07:12:32.831575] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:33.537 [2024-10-16 07:12:32.831591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:33.537 qpair failed and we were unable to recover it. 00:29:33.537 [2024-10-16 07:12:32.841427] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.537 [2024-10-16 07:12:32.841525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.537 [2024-10-16 07:12:32.841543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.537 [2024-10-16 07:12:32.841550] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.537 [2024-10-16 07:12:32.841557] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:33.537 [2024-10-16 07:12:32.841573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:33.537 qpair failed and we were unable to recover it. 00:29:33.537 [2024-10-16 07:12:32.851437] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.537 [2024-10-16 07:12:32.851494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.537 [2024-10-16 07:12:32.851512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.537 [2024-10-16 07:12:32.851519] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.537 [2024-10-16 07:12:32.851526] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:33.537 [2024-10-16 07:12:32.851542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:33.537 qpair failed and we were unable to recover it. 
00:29:33.537 [2024-10-16 07:12:32.861498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.537 [2024-10-16 07:12:32.861569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.537 [2024-10-16 07:12:32.861588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.538 [2024-10-16 07:12:32.861596] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.538 [2024-10-16 07:12:32.861602] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:33.538 [2024-10-16 07:12:32.861618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:33.538 qpair failed and we were unable to recover it.
00:29:33.538 [2024-10-16 07:12:32.871546] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.538 [2024-10-16 07:12:32.871611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.538 [2024-10-16 07:12:32.871628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.538 [2024-10-16 07:12:32.871636] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.538 [2024-10-16 07:12:32.871642] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:33.538 [2024-10-16 07:12:32.871659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:33.538 qpair failed and we were unable to recover it.
00:29:33.538 [2024-10-16 07:12:32.881556] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.538 [2024-10-16 07:12:32.881617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.538 [2024-10-16 07:12:32.881636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.538 [2024-10-16 07:12:32.881644] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.538 [2024-10-16 07:12:32.881650] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:33.538 [2024-10-16 07:12:32.881667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:33.538 qpair failed and we were unable to recover it.
00:29:33.538 [2024-10-16 07:12:32.891449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.538 [2024-10-16 07:12:32.891515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.538 [2024-10-16 07:12:32.891532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.538 [2024-10-16 07:12:32.891539] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.538 [2024-10-16 07:12:32.891546] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:33.538 [2024-10-16 07:12:32.891562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:33.538 qpair failed and we were unable to recover it.
00:29:33.538 [2024-10-16 07:12:32.901579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.538 [2024-10-16 07:12:32.901648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.538 [2024-10-16 07:12:32.901665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.538 [2024-10-16 07:12:32.901678] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.538 [2024-10-16 07:12:32.901685] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:33.538 [2024-10-16 07:12:32.901701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:33.538 qpair failed and we were unable to recover it.
00:29:33.538 [2024-10-16 07:12:32.911647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.538 [2024-10-16 07:12:32.911731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.538 [2024-10-16 07:12:32.911753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.538 [2024-10-16 07:12:32.911763] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.538 [2024-10-16 07:12:32.911771] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:33.538 [2024-10-16 07:12:32.911789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:33.538 qpair failed and we were unable to recover it.
00:29:33.538 [2024-10-16 07:12:32.921658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.538 [2024-10-16 07:12:32.921720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.538 [2024-10-16 07:12:32.921740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.538 [2024-10-16 07:12:32.921747] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.538 [2024-10-16 07:12:32.921754] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:33.538 [2024-10-16 07:12:32.921771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:33.538 qpair failed and we were unable to recover it.
00:29:33.538 [2024-10-16 07:12:32.931648] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.538 [2024-10-16 07:12:32.931705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.538 [2024-10-16 07:12:32.931723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.538 [2024-10-16 07:12:32.931731] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.538 [2024-10-16 07:12:32.931737] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:33.538 [2024-10-16 07:12:32.931754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:33.538 qpair failed and we were unable to recover it.
00:29:33.538 [2024-10-16 07:12:32.941706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.538 [2024-10-16 07:12:32.941773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.538 [2024-10-16 07:12:32.941791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.538 [2024-10-16 07:12:32.941798] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.538 [2024-10-16 07:12:32.941805] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:33.538 [2024-10-16 07:12:32.941821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:33.538 qpair failed and we were unable to recover it.
00:29:33.538 [2024-10-16 07:12:32.951783] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.538 [2024-10-16 07:12:32.951871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.538 [2024-10-16 07:12:32.951890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.538 [2024-10-16 07:12:32.951898] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.538 [2024-10-16 07:12:32.951905] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:33.538 [2024-10-16 07:12:32.951922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:33.538 qpair failed and we were unable to recover it.
00:29:33.538 [2024-10-16 07:12:32.961783] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.538 [2024-10-16 07:12:32.961848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.538 [2024-10-16 07:12:32.961867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.538 [2024-10-16 07:12:32.961875] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.538 [2024-10-16 07:12:32.961881] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:33.538 [2024-10-16 07:12:32.961898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:33.538 qpair failed and we were unable to recover it.
00:29:33.538 [2024-10-16 07:12:32.971818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.538 [2024-10-16 07:12:32.971888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.538 [2024-10-16 07:12:32.971907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.538 [2024-10-16 07:12:32.971914] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.538 [2024-10-16 07:12:32.971921] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:33.538 [2024-10-16 07:12:32.971938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:33.538 qpair failed and we were unable to recover it.
00:29:33.538 [2024-10-16 07:12:32.981871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.538 [2024-10-16 07:12:32.981938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.538 [2024-10-16 07:12:32.981956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.538 [2024-10-16 07:12:32.981964] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.538 [2024-10-16 07:12:32.981971] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:33.538 [2024-10-16 07:12:32.981987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:33.538 qpair failed and we were unable to recover it.
00:29:33.538 [2024-10-16 07:12:32.991788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.539 [2024-10-16 07:12:32.991865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.539 [2024-10-16 07:12:32.991883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.539 [2024-10-16 07:12:32.991897] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.539 [2024-10-16 07:12:32.991904] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:33.539 [2024-10-16 07:12:32.991921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:33.539 qpair failed and we were unable to recover it.
00:29:33.539 [2024-10-16 07:12:33.001908] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.539 [2024-10-16 07:12:33.001981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.539 [2024-10-16 07:12:33.001999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.539 [2024-10-16 07:12:33.002007] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.539 [2024-10-16 07:12:33.002013] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:33.539 [2024-10-16 07:12:33.002029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:33.539 qpair failed and we were unable to recover it.
00:29:33.539 [2024-10-16 07:12:33.011965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.539 [2024-10-16 07:12:33.012026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.539 [2024-10-16 07:12:33.012044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.539 [2024-10-16 07:12:33.012051] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.539 [2024-10-16 07:12:33.012058] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:33.539 [2024-10-16 07:12:33.012074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:33.539 qpair failed and we were unable to recover it.
00:29:33.539 [2024-10-16 07:12:33.021960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.539 [2024-10-16 07:12:33.022031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.539 [2024-10-16 07:12:33.022049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.539 [2024-10-16 07:12:33.022056] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.539 [2024-10-16 07:12:33.022063] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:33.539 [2024-10-16 07:12:33.022079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:33.539 qpair failed and we were unable to recover it.
00:29:33.539 [2024-10-16 07:12:33.032000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.539 [2024-10-16 07:12:33.032069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.539 [2024-10-16 07:12:33.032087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.539 [2024-10-16 07:12:33.032094] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.539 [2024-10-16 07:12:33.032101] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:33.539 [2024-10-16 07:12:33.032118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:33.539 qpair failed and we were unable to recover it.
00:29:33.801 [2024-10-16 07:12:33.041936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.802 [2024-10-16 07:12:33.042005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.802 [2024-10-16 07:12:33.042027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.802 [2024-10-16 07:12:33.042035] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.802 [2024-10-16 07:12:33.042041] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:33.802 [2024-10-16 07:12:33.042059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:33.802 qpair failed and we were unable to recover it.
00:29:33.802 [2024-10-16 07:12:33.052073] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.802 [2024-10-16 07:12:33.052140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.802 [2024-10-16 07:12:33.052158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.802 [2024-10-16 07:12:33.052166] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.802 [2024-10-16 07:12:33.052173] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:33.802 [2024-10-16 07:12:33.052190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:33.802 qpair failed and we were unable to recover it.
00:29:33.802 [2024-10-16 07:12:33.062127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.802 [2024-10-16 07:12:33.062200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.802 [2024-10-16 07:12:33.062219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.802 [2024-10-16 07:12:33.062227] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.802 [2024-10-16 07:12:33.062233] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:33.802 [2024-10-16 07:12:33.062251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:33.802 qpair failed and we were unable to recover it.
00:29:33.802 [2024-10-16 07:12:33.072152] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.802 [2024-10-16 07:12:33.072226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.802 [2024-10-16 07:12:33.072243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.802 [2024-10-16 07:12:33.072251] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.802 [2024-10-16 07:12:33.072257] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:33.802 [2024-10-16 07:12:33.072274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:33.802 qpair failed and we were unable to recover it.
00:29:33.802 [2024-10-16 07:12:33.082162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.802 [2024-10-16 07:12:33.082243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.802 [2024-10-16 07:12:33.082266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.802 [2024-10-16 07:12:33.082273] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.802 [2024-10-16 07:12:33.082280] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:33.802 [2024-10-16 07:12:33.082296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:33.802 qpair failed and we were unable to recover it.
00:29:33.802 [2024-10-16 07:12:33.092113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.802 [2024-10-16 07:12:33.092183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.802 [2024-10-16 07:12:33.092201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.802 [2024-10-16 07:12:33.092208] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.802 [2024-10-16 07:12:33.092215] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:33.802 [2024-10-16 07:12:33.092231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:33.802 qpair failed and we were unable to recover it.
00:29:33.802 [2024-10-16 07:12:33.102228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.802 [2024-10-16 07:12:33.102312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.802 [2024-10-16 07:12:33.102330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.802 [2024-10-16 07:12:33.102337] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.802 [2024-10-16 07:12:33.102344] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:33.802 [2024-10-16 07:12:33.102360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:33.802 qpair failed and we were unable to recover it.
00:29:33.802 [2024-10-16 07:12:33.112235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.802 [2024-10-16 07:12:33.112295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.802 [2024-10-16 07:12:33.112311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.802 [2024-10-16 07:12:33.112319] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.802 [2024-10-16 07:12:33.112325] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:33.802 [2024-10-16 07:12:33.112341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:33.802 qpair failed and we were unable to recover it.
00:29:33.802 [2024-10-16 07:12:33.122274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.802 [2024-10-16 07:12:33.122335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.802 [2024-10-16 07:12:33.122352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.802 [2024-10-16 07:12:33.122360] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.802 [2024-10-16 07:12:33.122367] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:33.802 [2024-10-16 07:12:33.122389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:33.802 qpair failed and we were unable to recover it.
00:29:33.802 [2024-10-16 07:12:33.132308] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.802 [2024-10-16 07:12:33.132400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.802 [2024-10-16 07:12:33.132417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.802 [2024-10-16 07:12:33.132424] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.802 [2024-10-16 07:12:33.132431] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:33.802 [2024-10-16 07:12:33.132446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:33.802 qpair failed and we were unable to recover it.
00:29:33.802 [2024-10-16 07:12:33.142242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.802 [2024-10-16 07:12:33.142325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.802 [2024-10-16 07:12:33.142341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.802 [2024-10-16 07:12:33.142349] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.802 [2024-10-16 07:12:33.142355] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:33.802 [2024-10-16 07:12:33.142370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:33.802 qpair failed and we were unable to recover it.
00:29:33.802 [2024-10-16 07:12:33.152372] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.802 [2024-10-16 07:12:33.152435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.802 [2024-10-16 07:12:33.152451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.802 [2024-10-16 07:12:33.152459] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.802 [2024-10-16 07:12:33.152465] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:33.802 [2024-10-16 07:12:33.152480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:33.802 qpair failed and we were unable to recover it.
00:29:33.802 [2024-10-16 07:12:33.162366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.802 [2024-10-16 07:12:33.162429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.802 [2024-10-16 07:12:33.162446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.802 [2024-10-16 07:12:33.162453] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.802 [2024-10-16 07:12:33.162460] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:33.802 [2024-10-16 07:12:33.162475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:33.802 qpair failed and we were unable to recover it.
00:29:33.802 [2024-10-16 07:12:33.172400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.802 [2024-10-16 07:12:33.172484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.802 [2024-10-16 07:12:33.172504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.802 [2024-10-16 07:12:33.172512] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.802 [2024-10-16 07:12:33.172520] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:33.802 [2024-10-16 07:12:33.172538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:33.803 qpair failed and we were unable to recover it.
00:29:33.803 [2024-10-16 07:12:33.182366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.803 [2024-10-16 07:12:33.182428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.803 [2024-10-16 07:12:33.182444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.803 [2024-10-16 07:12:33.182451] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.803 [2024-10-16 07:12:33.182458] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:33.803 [2024-10-16 07:12:33.182472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:33.803 qpair failed and we were unable to recover it.
00:29:33.803 [2024-10-16 07:12:33.192441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.803 [2024-10-16 07:12:33.192503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.803 [2024-10-16 07:12:33.192520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.803 [2024-10-16 07:12:33.192528] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.803 [2024-10-16 07:12:33.192534] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:33.803 [2024-10-16 07:12:33.192554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:33.803 qpair failed and we were unable to recover it.
00:29:33.803 [2024-10-16 07:12:33.202510] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.803 [2024-10-16 07:12:33.202566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.803 [2024-10-16 07:12:33.202582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.803 [2024-10-16 07:12:33.202589] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.803 [2024-10-16 07:12:33.202596] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:33.803 [2024-10-16 07:12:33.202610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:33.803 qpair failed and we were unable to recover it.
00:29:33.803 [2024-10-16 07:12:33.212417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.803 [2024-10-16 07:12:33.212475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.803 [2024-10-16 07:12:33.212489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.803 [2024-10-16 07:12:33.212497] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.803 [2024-10-16 07:12:33.212507] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:33.803 [2024-10-16 07:12:33.212522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:33.803 qpair failed and we were unable to recover it.
00:29:33.803 [2024-10-16 07:12:33.222542] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.803 [2024-10-16 07:12:33.222603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.803 [2024-10-16 07:12:33.222618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.803 [2024-10-16 07:12:33.222625] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.803 [2024-10-16 07:12:33.222631] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:33.803 [2024-10-16 07:12:33.222645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:33.803 qpair failed and we were unable to recover it.
00:29:33.803 [2024-10-16 07:12:33.232515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.803 [2024-10-16 07:12:33.232577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.803 [2024-10-16 07:12:33.232606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.803 [2024-10-16 07:12:33.232615] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.803 [2024-10-16 07:12:33.232622] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:33.803 [2024-10-16 07:12:33.232642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:33.803 qpair failed and we were unable to recover it.
00:29:33.803 [2024-10-16 07:12:33.242564] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.803 [2024-10-16 07:12:33.242616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.803 [2024-10-16 07:12:33.242645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.803 [2024-10-16 07:12:33.242655] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.803 [2024-10-16 07:12:33.242662] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:33.803 [2024-10-16 07:12:33.242682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:33.803 qpair failed and we were unable to recover it.
00:29:33.803 [2024-10-16 07:12:33.252632] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.803 [2024-10-16 07:12:33.252695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.803 [2024-10-16 07:12:33.252723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.803 [2024-10-16 07:12:33.252731] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.803 [2024-10-16 07:12:33.252739] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:33.803 [2024-10-16 07:12:33.252760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:33.803 qpair failed and we were unable to recover it.
00:29:33.803 [2024-10-16 07:12:33.262643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.803 [2024-10-16 07:12:33.262708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.803 [2024-10-16 07:12:33.262725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.803 [2024-10-16 07:12:33.262733] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.803 [2024-10-16 07:12:33.262740] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:33.803 [2024-10-16 07:12:33.262755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:33.803 qpair failed and we were unable to recover it.
00:29:33.803 [2024-10-16 07:12:33.272646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.803 [2024-10-16 07:12:33.272694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.803 [2024-10-16 07:12:33.272709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.803 [2024-10-16 07:12:33.272716] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.803 [2024-10-16 07:12:33.272722] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:33.803 [2024-10-16 07:12:33.272737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:33.803 qpair failed and we were unable to recover it.
00:29:33.803 [2024-10-16 07:12:33.282623] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.803 [2024-10-16 07:12:33.282671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.803 [2024-10-16 07:12:33.282685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.803 [2024-10-16 07:12:33.282692] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.803 [2024-10-16 07:12:33.282699] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:33.803 [2024-10-16 07:12:33.282713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:33.803 qpair failed and we were unable to recover it.
00:29:33.803 [2024-10-16 07:12:33.292725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.803 [2024-10-16 07:12:33.292851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.803 [2024-10-16 07:12:33.292866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.803 [2024-10-16 07:12:33.292873] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.803 [2024-10-16 07:12:33.292880] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:33.803 [2024-10-16 07:12:33.292894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:33.803 qpair failed and we were unable to recover it.
00:29:34.066 [2024-10-16 07:12:33.302641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.066 [2024-10-16 07:12:33.302693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.066 [2024-10-16 07:12:33.302707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.066 [2024-10-16 07:12:33.302715] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.066 [2024-10-16 07:12:33.302726] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:34.066 [2024-10-16 07:12:33.302741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.066 qpair failed and we were unable to recover it.
00:29:34.066 [2024-10-16 07:12:33.312781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.066 [2024-10-16 07:12:33.312828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.066 [2024-10-16 07:12:33.312842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.066 [2024-10-16 07:12:33.312854] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.066 [2024-10-16 07:12:33.312860] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:34.066 [2024-10-16 07:12:33.312875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.066 qpair failed and we were unable to recover it.
00:29:34.066 [2024-10-16 07:12:33.322820] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.066 [2024-10-16 07:12:33.322882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.066 [2024-10-16 07:12:33.322897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.066 [2024-10-16 07:12:33.322903] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.066 [2024-10-16 07:12:33.322910] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:34.066 [2024-10-16 07:12:33.322924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.066 qpair failed and we were unable to recover it.
00:29:34.066 [2024-10-16 07:12:33.332840] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.066 [2024-10-16 07:12:33.332921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.066 [2024-10-16 07:12:33.332934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.066 [2024-10-16 07:12:33.332942] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.066 [2024-10-16 07:12:33.332948] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:34.066 [2024-10-16 07:12:33.332962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.066 qpair failed and we were unable to recover it.
00:29:34.066 [2024-10-16 07:12:33.342864] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.066 [2024-10-16 07:12:33.342919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.066 [2024-10-16 07:12:33.342934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.066 [2024-10-16 07:12:33.342941] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.066 [2024-10-16 07:12:33.342948] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:34.066 [2024-10-16 07:12:33.342962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.066 qpair failed and we were unable to recover it.
00:29:34.066 [2024-10-16 07:12:33.352866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.066 [2024-10-16 07:12:33.352919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.066 [2024-10-16 07:12:33.352933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.066 [2024-10-16 07:12:33.352940] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.066 [2024-10-16 07:12:33.352947] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:34.066 [2024-10-16 07:12:33.352960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.066 qpair failed and we were unable to recover it.
00:29:34.066 [2024-10-16 07:12:33.362890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.066 [2024-10-16 07:12:33.362936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.066 [2024-10-16 07:12:33.362951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.066 [2024-10-16 07:12:33.362958] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.066 [2024-10-16 07:12:33.362964] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:34.066 [2024-10-16 07:12:33.362978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.066 qpair failed and we were unable to recover it.
00:29:34.066 [2024-10-16 07:12:33.372824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.066 [2024-10-16 07:12:33.372880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.066 [2024-10-16 07:12:33.372895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.066 [2024-10-16 07:12:33.372901] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.066 [2024-10-16 07:12:33.372908] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:34.066 [2024-10-16 07:12:33.372922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.066 qpair failed and we were unable to recover it.
00:29:34.066 [2024-10-16 07:12:33.382905] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.066 [2024-10-16 07:12:33.382962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.066 [2024-10-16 07:12:33.382976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.066 [2024-10-16 07:12:33.382983] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.066 [2024-10-16 07:12:33.382989] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:34.066 [2024-10-16 07:12:33.383002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.066 qpair failed and we were unable to recover it.
00:29:34.066 [2024-10-16 07:12:33.392917] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.066 [2024-10-16 07:12:33.392976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.066 [2024-10-16 07:12:33.392991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.066 [2024-10-16 07:12:33.393002] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.066 [2024-10-16 07:12:33.393008] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:34.066 [2024-10-16 07:12:33.393026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.066 qpair failed and we were unable to recover it.
00:29:34.066 [2024-10-16 07:12:33.402986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.066 [2024-10-16 07:12:33.403035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.066 [2024-10-16 07:12:33.403049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.066 [2024-10-16 07:12:33.403056] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.066 [2024-10-16 07:12:33.403062] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:34.066 [2024-10-16 07:12:33.403076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.066 qpair failed and we were unable to recover it.
00:29:34.066 [2024-10-16 07:12:33.413049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.066 [2024-10-16 07:12:33.413125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.066 [2024-10-16 07:12:33.413139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.066 [2024-10-16 07:12:33.413146] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.066 [2024-10-16 07:12:33.413152] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:34.066 [2024-10-16 07:12:33.413166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.066 qpair failed and we were unable to recover it.
00:29:34.066 [2024-10-16 07:12:33.423100] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.066 [2024-10-16 07:12:33.423189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.066 [2024-10-16 07:12:33.423203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.066 [2024-10-16 07:12:33.423210] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.066 [2024-10-16 07:12:33.423216] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:34.066 [2024-10-16 07:12:33.423230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.067 qpair failed and we were unable to recover it.
00:29:34.067 [2024-10-16 07:12:33.432957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.067 [2024-10-16 07:12:33.433005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.067 [2024-10-16 07:12:33.433018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.067 [2024-10-16 07:12:33.433025] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.067 [2024-10-16 07:12:33.433032] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:34.067 [2024-10-16 07:12:33.433045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.067 qpair failed and we were unable to recover it.
00:29:34.067 [2024-10-16 07:12:33.443105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.067 [2024-10-16 07:12:33.443177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.067 [2024-10-16 07:12:33.443191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.067 [2024-10-16 07:12:33.443197] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.067 [2024-10-16 07:12:33.443204] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:34.067 [2024-10-16 07:12:33.443217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.067 qpair failed and we were unable to recover it.
00:29:34.067 [2024-10-16 07:12:33.453171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.067 [2024-10-16 07:12:33.453219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.067 [2024-10-16 07:12:33.453233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.067 [2024-10-16 07:12:33.453240] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.067 [2024-10-16 07:12:33.453247] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:34.067 [2024-10-16 07:12:33.453261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.067 qpair failed and we were unable to recover it.
00:29:34.067 [2024-10-16 07:12:33.463198] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.067 [2024-10-16 07:12:33.463254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.067 [2024-10-16 07:12:33.463267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.067 [2024-10-16 07:12:33.463274] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.067 [2024-10-16 07:12:33.463280] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:34.067 [2024-10-16 07:12:33.463294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.067 qpair failed and we were unable to recover it.
00:29:34.067 [2024-10-16 07:12:33.473218] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.067 [2024-10-16 07:12:33.473268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.067 [2024-10-16 07:12:33.473282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.067 [2024-10-16 07:12:33.473289] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.067 [2024-10-16 07:12:33.473295] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:34.067 [2024-10-16 07:12:33.473309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.067 qpair failed and we were unable to recover it.
00:29:34.067 [2024-10-16 07:12:33.483210] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.067 [2024-10-16 07:12:33.483257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.067 [2024-10-16 07:12:33.483271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.067 [2024-10-16 07:12:33.483281] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.067 [2024-10-16 07:12:33.483288] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:34.067 [2024-10-16 07:12:33.483301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.067 qpair failed and we were unable to recover it.
00:29:34.067 [2024-10-16 07:12:33.493259] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.067 [2024-10-16 07:12:33.493337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.067 [2024-10-16 07:12:33.493351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.067 [2024-10-16 07:12:33.493358] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.067 [2024-10-16 07:12:33.493364] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:34.067 [2024-10-16 07:12:33.493378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.067 qpair failed and we were unable to recover it.
00:29:34.067 [2024-10-16 07:12:33.503287] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.067 [2024-10-16 07:12:33.503345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.067 [2024-10-16 07:12:33.503359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.067 [2024-10-16 07:12:33.503366] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.067 [2024-10-16 07:12:33.503373] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:34.067 [2024-10-16 07:12:33.503387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.067 qpair failed and we were unable to recover it.
00:29:34.067 [2024-10-16 07:12:33.513261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.067 [2024-10-16 07:12:33.513313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.067 [2024-10-16 07:12:33.513326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.067 [2024-10-16 07:12:33.513333] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.067 [2024-10-16 07:12:33.513339] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:34.067 [2024-10-16 07:12:33.513353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.067 qpair failed and we were unable to recover it.
00:29:34.067 [2024-10-16 07:12:33.523337] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.067 [2024-10-16 07:12:33.523382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.067 [2024-10-16 07:12:33.523395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.067 [2024-10-16 07:12:33.523402] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.067 [2024-10-16 07:12:33.523408] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:34.067 [2024-10-16 07:12:33.523422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.067 qpair failed and we were unable to recover it.
00:29:34.067 [2024-10-16 07:12:33.533352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.067 [2024-10-16 07:12:33.533407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.067 [2024-10-16 07:12:33.533420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.067 [2024-10-16 07:12:33.533428] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.067 [2024-10-16 07:12:33.533434] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:34.067 [2024-10-16 07:12:33.533448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.067 qpair failed and we were unable to recover it.
00:29:34.067 [2024-10-16 07:12:33.543400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.067 [2024-10-16 07:12:33.543501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.067 [2024-10-16 07:12:33.543517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.067 [2024-10-16 07:12:33.543523] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.067 [2024-10-16 07:12:33.543530] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:34.067 [2024-10-16 07:12:33.543547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.067 qpair failed and we were unable to recover it.
00:29:34.067 [2024-10-16 07:12:33.553287] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.067 [2024-10-16 07:12:33.553338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.067 [2024-10-16 07:12:33.553353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.067 [2024-10-16 07:12:33.553360] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.067 [2024-10-16 07:12:33.553366] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:34.067 [2024-10-16 07:12:33.553379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.067 qpair failed and we were unable to recover it.
00:29:34.067 [2024-10-16 07:12:33.563407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.067 [2024-10-16 07:12:33.563454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.067 [2024-10-16 07:12:33.563468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.067 [2024-10-16 07:12:33.563475] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.068 [2024-10-16 07:12:33.563481] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:34.068 [2024-10-16 07:12:33.563495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.068 qpair failed and we were unable to recover it.
00:29:34.330 [2024-10-16 07:12:33.573498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.330 [2024-10-16 07:12:33.573552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.330 [2024-10-16 07:12:33.573569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.330 [2024-10-16 07:12:33.573577] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.330 [2024-10-16 07:12:33.573583] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:34.330 [2024-10-16 07:12:33.573597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.330 qpair failed and we were unable to recover it.
00:29:34.330 [2024-10-16 07:12:33.583397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.330 [2024-10-16 07:12:33.583454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.330 [2024-10-16 07:12:33.583467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.330 [2024-10-16 07:12:33.583475] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.330 [2024-10-16 07:12:33.583481] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:34.330 [2024-10-16 07:12:33.583495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.330 qpair failed and we were unable to recover it.
00:29:34.330 [2024-10-16 07:12:33.593535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.330 [2024-10-16 07:12:33.593591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.330 [2024-10-16 07:12:33.593617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.330 [2024-10-16 07:12:33.593627] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.330 [2024-10-16 07:12:33.593635] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:34.330 [2024-10-16 07:12:33.593654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.330 qpair failed and we were unable to recover it.
00:29:34.330 [2024-10-16 07:12:33.603535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.330 [2024-10-16 07:12:33.603589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.330 [2024-10-16 07:12:33.603605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.330 [2024-10-16 07:12:33.603612] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.330 [2024-10-16 07:12:33.603619] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:34.330 [2024-10-16 07:12:33.603633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.330 qpair failed and we were unable to recover it.
00:29:34.330 [2024-10-16 07:12:33.613596] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.330 [2024-10-16 07:12:33.613651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.330 [2024-10-16 07:12:33.613677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.330 [2024-10-16 07:12:33.613685] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.330 [2024-10-16 07:12:33.613692] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:34.330 [2024-10-16 07:12:33.613716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.330 qpair failed and we were unable to recover it.
00:29:34.330 [2024-10-16 07:12:33.623609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.330 [2024-10-16 07:12:33.623666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.330 [2024-10-16 07:12:33.623682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.330 [2024-10-16 07:12:33.623690] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.330 [2024-10-16 07:12:33.623696] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:34.330 [2024-10-16 07:12:33.623711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.330 qpair failed and we were unable to recover it.
00:29:34.330 [2024-10-16 07:12:33.633610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.330 [2024-10-16 07:12:33.633656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.330 [2024-10-16 07:12:33.633670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.330 [2024-10-16 07:12:33.633677] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.330 [2024-10-16 07:12:33.633683] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:34.330 [2024-10-16 07:12:33.633697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.330 qpair failed and we were unable to recover it.
00:29:34.330 [2024-10-16 07:12:33.643646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.330 [2024-10-16 07:12:33.643695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.330 [2024-10-16 07:12:33.643708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.330 [2024-10-16 07:12:33.643716] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.330 [2024-10-16 07:12:33.643722] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:34.330 [2024-10-16 07:12:33.643736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.330 qpair failed and we were unable to recover it.
00:29:34.330 [2024-10-16 07:12:33.653704] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.330 [2024-10-16 07:12:33.653789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.330 [2024-10-16 07:12:33.653802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.330 [2024-10-16 07:12:33.653809] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.330 [2024-10-16 07:12:33.653815] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:34.330 [2024-10-16 07:12:33.653829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.330 qpair failed and we were unable to recover it.
00:29:34.330 [2024-10-16 07:12:33.663857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.330 [2024-10-16 07:12:33.663921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.331 [2024-10-16 07:12:33.663939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.331 [2024-10-16 07:12:33.663946] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.331 [2024-10-16 07:12:33.663952] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:34.331 [2024-10-16 07:12:33.663966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.331 qpair failed and we were unable to recover it.
00:29:34.331 [2024-10-16 07:12:33.673673] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.331 [2024-10-16 07:12:33.673723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.331 [2024-10-16 07:12:33.673738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.331 [2024-10-16 07:12:33.673745] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.331 [2024-10-16 07:12:33.673752] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:34.331 [2024-10-16 07:12:33.673767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.331 qpair failed and we were unable to recover it.
00:29:34.331 [2024-10-16 07:12:33.683783] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.331 [2024-10-16 07:12:33.683850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.331 [2024-10-16 07:12:33.683865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.331 [2024-10-16 07:12:33.683872] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.331 [2024-10-16 07:12:33.683879] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:34.331 [2024-10-16 07:12:33.683893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.331 qpair failed and we were unable to recover it.
00:29:34.331 [2024-10-16 07:12:33.693832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.331 [2024-10-16 07:12:33.693892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.331 [2024-10-16 07:12:33.693906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.331 [2024-10-16 07:12:33.693913] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.331 [2024-10-16 07:12:33.693919] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:34.331 [2024-10-16 07:12:33.693933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.331 qpair failed and we were unable to recover it.
00:29:34.331 [2024-10-16 07:12:33.703821] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.331 [2024-10-16 07:12:33.703906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.331 [2024-10-16 07:12:33.703920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.331 [2024-10-16 07:12:33.703927] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.331 [2024-10-16 07:12:33.703934] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:34.331 [2024-10-16 07:12:33.703955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.331 qpair failed and we were unable to recover it.
00:29:34.331 [2024-10-16 07:12:33.713857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.331 [2024-10-16 07:12:33.713908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.331 [2024-10-16 07:12:33.713921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.331 [2024-10-16 07:12:33.713928] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.331 [2024-10-16 07:12:33.713934] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:34.331 [2024-10-16 07:12:33.713948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.331 qpair failed and we were unable to recover it.
00:29:34.331 [2024-10-16 07:12:33.723874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.331 [2024-10-16 07:12:33.723919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.331 [2024-10-16 07:12:33.723932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.331 [2024-10-16 07:12:33.723939] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.331 [2024-10-16 07:12:33.723946] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:34.331 [2024-10-16 07:12:33.723959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.331 qpair failed and we were unable to recover it.
00:29:34.331 [2024-10-16 07:12:33.733908] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.331 [2024-10-16 07:12:33.733954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.331 [2024-10-16 07:12:33.733968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.331 [2024-10-16 07:12:33.733975] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.331 [2024-10-16 07:12:33.733981] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:34.331 [2024-10-16 07:12:33.733995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.331 qpair failed and we were unable to recover it.
00:29:34.331 [2024-10-16 07:12:33.743952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.331 [2024-10-16 07:12:33.744010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.331 [2024-10-16 07:12:33.744023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.331 [2024-10-16 07:12:33.744030] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.331 [2024-10-16 07:12:33.744037] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:34.331 [2024-10-16 07:12:33.744050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.331 qpair failed and we were unable to recover it.
00:29:34.331 [2024-10-16 07:12:33.753984] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.331 [2024-10-16 07:12:33.754039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.331 [2024-10-16 07:12:33.754056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.331 [2024-10-16 07:12:33.754063] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.331 [2024-10-16 07:12:33.754069] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:34.331 [2024-10-16 07:12:33.754083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.331 qpair failed and we were unable to recover it.
00:29:34.331 [2024-10-16 07:12:33.763986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.331 [2024-10-16 07:12:33.764031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.331 [2024-10-16 07:12:33.764045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.331 [2024-10-16 07:12:33.764052] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.331 [2024-10-16 07:12:33.764058] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:34.331 [2024-10-16 07:12:33.764072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.331 qpair failed and we were unable to recover it.
00:29:34.331 [2024-10-16 07:12:33.774040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.331 [2024-10-16 07:12:33.774118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.331 [2024-10-16 07:12:33.774133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.331 [2024-10-16 07:12:33.774141] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.331 [2024-10-16 07:12:33.774147] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:34.331 [2024-10-16 07:12:33.774165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.331 qpair failed and we were unable to recover it.
00:29:34.331 [2024-10-16 07:12:33.784090] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.331 [2024-10-16 07:12:33.784146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.331 [2024-10-16 07:12:33.784161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.331 [2024-10-16 07:12:33.784168] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.331 [2024-10-16 07:12:33.784175] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:34.331 [2024-10-16 07:12:33.784188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.331 qpair failed and we were unable to recover it.
00:29:34.331 [2024-10-16 07:12:33.794075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.331 [2024-10-16 07:12:33.794122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.331 [2024-10-16 07:12:33.794136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.331 [2024-10-16 07:12:33.794143] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.331 [2024-10-16 07:12:33.794152] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:34.331 [2024-10-16 07:12:33.794166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.331 qpair failed and we were unable to recover it.
00:29:34.331 [2024-10-16 07:12:33.804101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.331 [2024-10-16 07:12:33.804179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.331 [2024-10-16 07:12:33.804193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.332 [2024-10-16 07:12:33.804199] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.332 [2024-10-16 07:12:33.804206] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:34.332 [2024-10-16 07:12:33.804220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.332 qpair failed and we were unable to recover it.
00:29:34.332 [2024-10-16 07:12:33.814149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.332 [2024-10-16 07:12:33.814199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.332 [2024-10-16 07:12:33.814212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.332 [2024-10-16 07:12:33.814219] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.332 [2024-10-16 07:12:33.814225] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:34.332 [2024-10-16 07:12:33.814238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.332 qpair failed and we were unable to recover it.
00:29:34.332 [2024-10-16 07:12:33.824203] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.332 [2024-10-16 07:12:33.824256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.332 [2024-10-16 07:12:33.824270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.332 [2024-10-16 07:12:33.824277] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.332 [2024-10-16 07:12:33.824283] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:34.332 [2024-10-16 07:12:33.824297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.332 qpair failed and we were unable to recover it.
00:29:34.594 [2024-10-16 07:12:33.834153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.594 [2024-10-16 07:12:33.834201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.594 [2024-10-16 07:12:33.834214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.594 [2024-10-16 07:12:33.834222] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.594 [2024-10-16 07:12:33.834228] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:34.594 [2024-10-16 07:12:33.834241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.594 qpair failed and we were unable to recover it.
00:29:34.594 [2024-10-16 07:12:33.844187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.594 [2024-10-16 07:12:33.844242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.594 [2024-10-16 07:12:33.844255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.594 [2024-10-16 07:12:33.844263] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.594 [2024-10-16 07:12:33.844270] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:34.594 [2024-10-16 07:12:33.844285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.594 qpair failed and we were unable to recover it.
00:29:34.594 [2024-10-16 07:12:33.854230] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.595 [2024-10-16 07:12:33.854323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.595 [2024-10-16 07:12:33.854337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.595 [2024-10-16 07:12:33.854344] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.595 [2024-10-16 07:12:33.854350] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:34.595 [2024-10-16 07:12:33.854364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.595 qpair failed and we were unable to recover it. 00:29:34.595 [2024-10-16 07:12:33.864283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.595 [2024-10-16 07:12:33.864335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.595 [2024-10-16 07:12:33.864349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.595 [2024-10-16 07:12:33.864356] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.595 [2024-10-16 07:12:33.864362] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:34.595 [2024-10-16 07:12:33.864376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.595 qpair failed and we were unable to recover it. 00:29:34.595 [2024-10-16 07:12:33.874293] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.595 [2024-10-16 07:12:33.874346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.595 [2024-10-16 07:12:33.874359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.595 [2024-10-16 07:12:33.874366] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.595 [2024-10-16 07:12:33.874373] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:34.595 [2024-10-16 07:12:33.874387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.595 qpair failed and we were unable to recover it. 
00:29:34.595 [2024-10-16 07:12:33.884446] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.595 [2024-10-16 07:12:33.884504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.595 [2024-10-16 07:12:33.884518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.595 [2024-10-16 07:12:33.884528] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.595 [2024-10-16 07:12:33.884534] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:34.595 [2024-10-16 07:12:33.884548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.595 qpair failed and we were unable to recover it. 00:29:34.595 [2024-10-16 07:12:33.894403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.595 [2024-10-16 07:12:33.894487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.595 [2024-10-16 07:12:33.894501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.595 [2024-10-16 07:12:33.894508] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.595 [2024-10-16 07:12:33.894514] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:34.595 [2024-10-16 07:12:33.894528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.595 qpair failed and we were unable to recover it. 00:29:34.595 [2024-10-16 07:12:33.904403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.595 [2024-10-16 07:12:33.904455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.595 [2024-10-16 07:12:33.904468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.595 [2024-10-16 07:12:33.904475] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.595 [2024-10-16 07:12:33.904481] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:34.595 [2024-10-16 07:12:33.904495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.595 qpair failed and we were unable to recover it. 
00:29:34.595 [2024-10-16 07:12:33.914414] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.595 [2024-10-16 07:12:33.914462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.595 [2024-10-16 07:12:33.914475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.595 [2024-10-16 07:12:33.914482] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.595 [2024-10-16 07:12:33.914488] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:34.595 [2024-10-16 07:12:33.914502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.595 qpair failed and we were unable to recover it. 00:29:34.595 [2024-10-16 07:12:33.924429] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.595 [2024-10-16 07:12:33.924479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.595 [2024-10-16 07:12:33.924492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.595 [2024-10-16 07:12:33.924499] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.595 [2024-10-16 07:12:33.924506] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:34.595 [2024-10-16 07:12:33.924519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.595 qpair failed and we were unable to recover it. 00:29:34.595 [2024-10-16 07:12:33.934467] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.595 [2024-10-16 07:12:33.934526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.595 [2024-10-16 07:12:33.934551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.595 [2024-10-16 07:12:33.934560] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.595 [2024-10-16 07:12:33.934567] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:34.595 [2024-10-16 07:12:33.934586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.595 qpair failed and we were unable to recover it. 
00:29:34.595 [2024-10-16 07:12:33.944394] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.595 [2024-10-16 07:12:33.944451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.595 [2024-10-16 07:12:33.944469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.595 [2024-10-16 07:12:33.944476] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.595 [2024-10-16 07:12:33.944484] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:34.595 [2024-10-16 07:12:33.944504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.595 qpair failed and we were unable to recover it. 00:29:34.595 [2024-10-16 07:12:33.954502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.595 [2024-10-16 07:12:33.954566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.595 [2024-10-16 07:12:33.954581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.595 [2024-10-16 07:12:33.954588] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.595 [2024-10-16 07:12:33.954595] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:34.595 [2024-10-16 07:12:33.954609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.595 qpair failed and we were unable to recover it. 00:29:34.595 [2024-10-16 07:12:33.964493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.595 [2024-10-16 07:12:33.964541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.595 [2024-10-16 07:12:33.964555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.595 [2024-10-16 07:12:33.964563] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.595 [2024-10-16 07:12:33.964569] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:34.595 [2024-10-16 07:12:33.964583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.595 qpair failed and we were unable to recover it. 
00:29:34.595 [2024-10-16 07:12:33.974589] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.595 [2024-10-16 07:12:33.974635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.595 [2024-10-16 07:12:33.974649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.595 [2024-10-16 07:12:33.974660] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.595 [2024-10-16 07:12:33.974667] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:34.595 [2024-10-16 07:12:33.974681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.595 qpair failed and we were unable to recover it. 00:29:34.595 [2024-10-16 07:12:33.984639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.595 [2024-10-16 07:12:33.984696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.595 [2024-10-16 07:12:33.984710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.595 [2024-10-16 07:12:33.984717] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.595 [2024-10-16 07:12:33.984723] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:34.596 [2024-10-16 07:12:33.984738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.596 qpair failed and we were unable to recover it. 00:29:34.596 [2024-10-16 07:12:33.994610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.596 [2024-10-16 07:12:33.994658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.596 [2024-10-16 07:12:33.994672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.596 [2024-10-16 07:12:33.994679] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.596 [2024-10-16 07:12:33.994685] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:34.596 [2024-10-16 07:12:33.994699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.596 qpair failed and we were unable to recover it. 
00:29:34.596 [2024-10-16 07:12:34.004642] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.596 [2024-10-16 07:12:34.004692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.596 [2024-10-16 07:12:34.004706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.596 [2024-10-16 07:12:34.004713] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.596 [2024-10-16 07:12:34.004719] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:34.596 [2024-10-16 07:12:34.004733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.596 qpair failed and we were unable to recover it. 00:29:34.596 [2024-10-16 07:12:34.014670] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.596 [2024-10-16 07:12:34.014745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.596 [2024-10-16 07:12:34.014759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.596 [2024-10-16 07:12:34.014766] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.596 [2024-10-16 07:12:34.014772] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:34.596 [2024-10-16 07:12:34.014786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.596 qpair failed and we were unable to recover it. 00:29:34.596 [2024-10-16 07:12:34.024607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.596 [2024-10-16 07:12:34.024664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.596 [2024-10-16 07:12:34.024678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.596 [2024-10-16 07:12:34.024685] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.596 [2024-10-16 07:12:34.024692] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:34.596 [2024-10-16 07:12:34.024706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.596 qpair failed and we were unable to recover it. 
00:29:34.596 [2024-10-16 07:12:34.034754] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.596 [2024-10-16 07:12:34.034804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.596 [2024-10-16 07:12:34.034818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.596 [2024-10-16 07:12:34.034825] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.596 [2024-10-16 07:12:34.034831] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:34.596 [2024-10-16 07:12:34.034851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.596 qpair failed and we were unable to recover it. 00:29:34.596 [2024-10-16 07:12:34.044719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.596 [2024-10-16 07:12:34.044817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.596 [2024-10-16 07:12:34.044831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.596 [2024-10-16 07:12:34.044838] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.596 [2024-10-16 07:12:34.044849] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:34.596 [2024-10-16 07:12:34.044864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.596 qpair failed and we were unable to recover it. 00:29:34.596 [2024-10-16 07:12:34.054767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.596 [2024-10-16 07:12:34.054814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.596 [2024-10-16 07:12:34.054827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.596 [2024-10-16 07:12:34.054834] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.596 [2024-10-16 07:12:34.054841] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:34.596 [2024-10-16 07:12:34.054859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.596 qpair failed and we were unable to recover it. 
00:29:34.596 [2024-10-16 07:12:34.064793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.596 [2024-10-16 07:12:34.064848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.596 [2024-10-16 07:12:34.064865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.596 [2024-10-16 07:12:34.064872] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.596 [2024-10-16 07:12:34.064878] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:34.596 [2024-10-16 07:12:34.064893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.596 qpair failed and we were unable to recover it. 00:29:34.596 [2024-10-16 07:12:34.074864] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.596 [2024-10-16 07:12:34.074916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.596 [2024-10-16 07:12:34.074929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.596 [2024-10-16 07:12:34.074936] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.596 [2024-10-16 07:12:34.074943] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:34.596 [2024-10-16 07:12:34.074956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.596 qpair failed and we were unable to recover it. 00:29:34.596 [2024-10-16 07:12:34.084825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.596 [2024-10-16 07:12:34.084876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.596 [2024-10-16 07:12:34.084889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.596 [2024-10-16 07:12:34.084896] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.596 [2024-10-16 07:12:34.084903] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:34.596 [2024-10-16 07:12:34.084917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.596 qpair failed and we were unable to recover it. 
00:29:34.859 [2024-10-16 07:12:34.094924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.859 [2024-10-16 07:12:34.094973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.859 [2024-10-16 07:12:34.094988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.859 [2024-10-16 07:12:34.094996] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.859 [2024-10-16 07:12:34.095003] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:34.859 [2024-10-16 07:12:34.095016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.859 qpair failed and we were unable to recover it. 00:29:34.859 [2024-10-16 07:12:34.104821] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.859 [2024-10-16 07:12:34.104882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.859 [2024-10-16 07:12:34.104896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.859 [2024-10-16 07:12:34.104902] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.859 [2024-10-16 07:12:34.104909] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:34.859 [2024-10-16 07:12:34.104927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.859 qpair failed and we were unable to recover it. 00:29:34.859 [2024-10-16 07:12:34.114904] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.859 [2024-10-16 07:12:34.114953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.859 [2024-10-16 07:12:34.114966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.859 [2024-10-16 07:12:34.114973] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.859 [2024-10-16 07:12:34.114979] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:34.859 [2024-10-16 07:12:34.114993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.859 qpair failed and we were unable to recover it. 
00:29:34.859 [2024-10-16 07:12:34.124932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.859 [2024-10-16 07:12:34.124989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.859 [2024-10-16 07:12:34.125002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.859 [2024-10-16 07:12:34.125009] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.859 [2024-10-16 07:12:34.125015] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:34.859 [2024-10-16 07:12:34.125029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.859 qpair failed and we were unable to recover it. 00:29:34.859 [2024-10-16 07:12:34.135014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.859 [2024-10-16 07:12:34.135066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.859 [2024-10-16 07:12:34.135080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.859 [2024-10-16 07:12:34.135087] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.859 [2024-10-16 07:12:34.135093] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:34.859 [2024-10-16 07:12:34.135106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.859 qpair failed and we were unable to recover it. 00:29:34.859 [2024-10-16 07:12:34.145065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.859 [2024-10-16 07:12:34.145121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.859 [2024-10-16 07:12:34.145134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.859 [2024-10-16 07:12:34.145141] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.859 [2024-10-16 07:12:34.145147] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:34.859 [2024-10-16 07:12:34.145161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.859 qpair failed and we were unable to recover it. 
00:29:34.859 [2024-10-16 07:12:34.155028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.859 [2024-10-16 07:12:34.155082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.859 [2024-10-16 07:12:34.155098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.859 [2024-10-16 07:12:34.155105] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.859 [2024-10-16 07:12:34.155112] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:34.859 [2024-10-16 07:12:34.155125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.859 qpair failed and we were unable to recover it. 00:29:34.859 [2024-10-16 07:12:34.165054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.859 [2024-10-16 07:12:34.165101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.859 [2024-10-16 07:12:34.165114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.859 [2024-10-16 07:12:34.165121] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.859 [2024-10-16 07:12:34.165127] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:34.859 [2024-10-16 07:12:34.165141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.859 qpair failed and we were unable to recover it. 00:29:34.859 [2024-10-16 07:12:34.175113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.859 [2024-10-16 07:12:34.175162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.859 [2024-10-16 07:12:34.175176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.859 [2024-10-16 07:12:34.175183] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.859 [2024-10-16 07:12:34.175190] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:34.859 [2024-10-16 07:12:34.175204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.859 qpair failed and we were unable to recover it. 
00:29:34.859 [2024-10-16 07:12:34.185136] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.859 [2024-10-16 07:12:34.185195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.859 [2024-10-16 07:12:34.185208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.859 [2024-10-16 07:12:34.185215] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.859 [2024-10-16 07:12:34.185221] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:34.859 [2024-10-16 07:12:34.185235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.859 qpair failed and we were unable to recover it. 00:29:34.859 [2024-10-16 07:12:34.195160] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.859 [2024-10-16 07:12:34.195211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.859 [2024-10-16 07:12:34.195225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.859 [2024-10-16 07:12:34.195232] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.859 [2024-10-16 07:12:34.195238] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:34.859 [2024-10-16 07:12:34.195256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.859 qpair failed and we were unable to recover it. 00:29:34.859 [2024-10-16 07:12:34.205158] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.859 [2024-10-16 07:12:34.205208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.859 [2024-10-16 07:12:34.205221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.859 [2024-10-16 07:12:34.205228] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.859 [2024-10-16 07:12:34.205234] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:34.859 [2024-10-16 07:12:34.205248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.859 qpair failed and we were unable to recover it. 
00:29:34.859 [2024-10-16 07:12:34.215221] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.859 [2024-10-16 07:12:34.215271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.859 [2024-10-16 07:12:34.215286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.859 [2024-10-16 07:12:34.215293] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.859 [2024-10-16 07:12:34.215300] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:34.859 [2024-10-16 07:12:34.215317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.860 qpair failed and we were unable to recover it. 00:29:34.860 [2024-10-16 07:12:34.225268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.860 [2024-10-16 07:12:34.225322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.860 [2024-10-16 07:12:34.225336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.860 [2024-10-16 07:12:34.225343] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.860 [2024-10-16 07:12:34.225349] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:34.860 [2024-10-16 07:12:34.225363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.860 qpair failed and we were unable to recover it. 00:29:34.860 [2024-10-16 07:12:34.235259] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.860 [2024-10-16 07:12:34.235311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.860 [2024-10-16 07:12:34.235325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.860 [2024-10-16 07:12:34.235331] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.860 [2024-10-16 07:12:34.235338] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:34.860 [2024-10-16 07:12:34.235351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.860 qpair failed and we were unable to recover it. 
00:29:34.860 [2024-10-16 07:12:34.245290] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.860 [2024-10-16 07:12:34.245339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.860 [2024-10-16 07:12:34.245357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.860 [2024-10-16 07:12:34.245365] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.860 [2024-10-16 07:12:34.245371] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:34.860 [2024-10-16 07:12:34.245385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.860 qpair failed and we were unable to recover it. 00:29:34.860 [2024-10-16 07:12:34.255304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.860 [2024-10-16 07:12:34.255349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.860 [2024-10-16 07:12:34.255362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.860 [2024-10-16 07:12:34.255370] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.860 [2024-10-16 07:12:34.255376] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:34.860 [2024-10-16 07:12:34.255389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.860 qpair failed and we were unable to recover it. 00:29:34.860 [2024-10-16 07:12:34.265383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.860 [2024-10-16 07:12:34.265438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.860 [2024-10-16 07:12:34.265452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.860 [2024-10-16 07:12:34.265459] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.860 [2024-10-16 07:12:34.265465] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:34.860 [2024-10-16 07:12:34.265479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.860 qpair failed and we were unable to recover it. 
00:29:34.860 [2024-10-16 07:12:34.275349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.860 [2024-10-16 07:12:34.275403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.860 [2024-10-16 07:12:34.275416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.860 [2024-10-16 07:12:34.275423] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.860 [2024-10-16 07:12:34.275429] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:34.860 [2024-10-16 07:12:34.275443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.860 qpair failed and we were unable to recover it. 00:29:34.860 [2024-10-16 07:12:34.285366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.860 [2024-10-16 07:12:34.285411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.860 [2024-10-16 07:12:34.285424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.860 [2024-10-16 07:12:34.285431] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.860 [2024-10-16 07:12:34.285441] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:34.860 [2024-10-16 07:12:34.285454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.860 qpair failed and we were unable to recover it. 00:29:34.860 [2024-10-16 07:12:34.295418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.860 [2024-10-16 07:12:34.295465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.860 [2024-10-16 07:12:34.295479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.860 [2024-10-16 07:12:34.295486] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.860 [2024-10-16 07:12:34.295492] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:34.860 [2024-10-16 07:12:34.295506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.860 qpair failed and we were unable to recover it. 
00:29:34.860 [2024-10-16 07:12:34.305363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.860 [2024-10-16 07:12:34.305460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.860 [2024-10-16 07:12:34.305474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.860 [2024-10-16 07:12:34.305481] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.860 [2024-10-16 07:12:34.305487] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:34.860 [2024-10-16 07:12:34.305501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.860 qpair failed and we were unable to recover it. 00:29:34.860 [2024-10-16 07:12:34.315480] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.860 [2024-10-16 07:12:34.315530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.860 [2024-10-16 07:12:34.315543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.860 [2024-10-16 07:12:34.315550] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.860 [2024-10-16 07:12:34.315556] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:34.860 [2024-10-16 07:12:34.315569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.860 qpair failed and we were unable to recover it. 00:29:34.860 [2024-10-16 07:12:34.325498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.860 [2024-10-16 07:12:34.325542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.860 [2024-10-16 07:12:34.325556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.860 [2024-10-16 07:12:34.325562] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.860 [2024-10-16 07:12:34.325569] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:34.860 [2024-10-16 07:12:34.325582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.860 qpair failed and we were unable to recover it. 
00:29:34.860 [2024-10-16 07:12:34.335524] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.860 [2024-10-16 07:12:34.335575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.860 [2024-10-16 07:12:34.335589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.860 [2024-10-16 07:12:34.335596] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.860 [2024-10-16 07:12:34.335602] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:34.860 [2024-10-16 07:12:34.335616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.860 qpair failed and we were unable to recover it. 00:29:34.860 [2024-10-16 07:12:34.345598] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.860 [2024-10-16 07:12:34.345655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.860 [2024-10-16 07:12:34.345668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.860 [2024-10-16 07:12:34.345676] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.860 [2024-10-16 07:12:34.345682] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:34.860 [2024-10-16 07:12:34.345695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.860 qpair failed and we were unable to recover it. 00:29:34.860 [2024-10-16 07:12:34.355577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.860 [2024-10-16 07:12:34.355628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.860 [2024-10-16 07:12:34.355641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.860 [2024-10-16 07:12:34.355648] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.860 [2024-10-16 07:12:34.355654] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:34.860 [2024-10-16 07:12:34.355668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.860 qpair failed and we were unable to recover it. 
00:29:35.122 [2024-10-16 07:12:34.365606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.122 [2024-10-16 07:12:34.365655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.122 [2024-10-16 07:12:34.365669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.122 [2024-10-16 07:12:34.365676] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.122 [2024-10-16 07:12:34.365682] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.122 [2024-10-16 07:12:34.365696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.122 qpair failed and we were unable to recover it. 00:29:35.122 [2024-10-16 07:12:34.375608] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.122 [2024-10-16 07:12:34.375651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.122 [2024-10-16 07:12:34.375665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.122 [2024-10-16 07:12:34.375672] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.122 [2024-10-16 07:12:34.375682] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.122 [2024-10-16 07:12:34.375696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.122 qpair failed and we were unable to recover it. 00:29:35.122 [2024-10-16 07:12:34.385699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.122 [2024-10-16 07:12:34.385753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.122 [2024-10-16 07:12:34.385766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.122 [2024-10-16 07:12:34.385773] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.122 [2024-10-16 07:12:34.385779] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.122 [2024-10-16 07:12:34.385793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.122 qpair failed and we were unable to recover it. 
00:29:35.122 [2024-10-16 07:12:34.395700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.122 [2024-10-16 07:12:34.395760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.122 [2024-10-16 07:12:34.395774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.122 [2024-10-16 07:12:34.395781] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.122 [2024-10-16 07:12:34.395787] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.123 [2024-10-16 07:12:34.395801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.123 qpair failed and we were unable to recover it. 00:29:35.123 [2024-10-16 07:12:34.405708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.123 [2024-10-16 07:12:34.405755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.123 [2024-10-16 07:12:34.405768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.123 [2024-10-16 07:12:34.405775] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.123 [2024-10-16 07:12:34.405781] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.123 [2024-10-16 07:12:34.405795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.123 qpair failed and we were unable to recover it. 00:29:35.123 [2024-10-16 07:12:34.415741] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.123 [2024-10-16 07:12:34.415795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.123 [2024-10-16 07:12:34.415808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.123 [2024-10-16 07:12:34.415815] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.123 [2024-10-16 07:12:34.415822] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.123 [2024-10-16 07:12:34.415835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.123 qpair failed and we were unable to recover it. 
00:29:35.123 [2024-10-16 07:12:34.425810] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.123 [2024-10-16 07:12:34.425866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.123 [2024-10-16 07:12:34.425880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.123 [2024-10-16 07:12:34.425887] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.123 [2024-10-16 07:12:34.425893] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.123 [2024-10-16 07:12:34.425907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.123 qpair failed and we were unable to recover it. 00:29:35.123 [2024-10-16 07:12:34.435670] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.123 [2024-10-16 07:12:34.435728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.123 [2024-10-16 07:12:34.435742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.123 [2024-10-16 07:12:34.435749] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.123 [2024-10-16 07:12:34.435755] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.123 [2024-10-16 07:12:34.435769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.123 qpair failed and we were unable to recover it. 00:29:35.123 [2024-10-16 07:12:34.445814] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.123 [2024-10-16 07:12:34.445863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.123 [2024-10-16 07:12:34.445876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.123 [2024-10-16 07:12:34.445883] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.123 [2024-10-16 07:12:34.445889] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.123 [2024-10-16 07:12:34.445903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.123 qpair failed and we were unable to recover it. 
00:29:35.123 [2024-10-16 07:12:34.455815] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.123 [2024-10-16 07:12:34.455863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.123 [2024-10-16 07:12:34.455877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.123 [2024-10-16 07:12:34.455883] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.123 [2024-10-16 07:12:34.455890] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.123 [2024-10-16 07:12:34.455903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.123 qpair failed and we were unable to recover it. 00:29:35.123 [2024-10-16 07:12:34.465922] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.123 [2024-10-16 07:12:34.465977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.123 [2024-10-16 07:12:34.465990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.123 [2024-10-16 07:12:34.466004] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.123 [2024-10-16 07:12:34.466011] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.123 [2024-10-16 07:12:34.466025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.123 qpair failed and we were unable to recover it. 00:29:35.123 [2024-10-16 07:12:34.475788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.123 [2024-10-16 07:12:34.475839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.123 [2024-10-16 07:12:34.475857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.123 [2024-10-16 07:12:34.475865] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.123 [2024-10-16 07:12:34.475871] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.123 [2024-10-16 07:12:34.475885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.123 qpair failed and we were unable to recover it. 
00:29:35.123 [2024-10-16 07:12:34.485927] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.123 [2024-10-16 07:12:34.485978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.123 [2024-10-16 07:12:34.485992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.123 [2024-10-16 07:12:34.485999] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.123 [2024-10-16 07:12:34.486005] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.123 [2024-10-16 07:12:34.486019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.123 qpair failed and we were unable to recover it. 00:29:35.123 [2024-10-16 07:12:34.495959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.123 [2024-10-16 07:12:34.496005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.123 [2024-10-16 07:12:34.496018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.123 [2024-10-16 07:12:34.496025] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.123 [2024-10-16 07:12:34.496031] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.123 [2024-10-16 07:12:34.496045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.123 qpair failed and we were unable to recover it. 00:29:35.123 [2024-10-16 07:12:34.506036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.123 [2024-10-16 07:12:34.506088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.123 [2024-10-16 07:12:34.506101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.123 [2024-10-16 07:12:34.506108] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.123 [2024-10-16 07:12:34.506115] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.123 [2024-10-16 07:12:34.506128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.123 qpair failed and we were unable to recover it. 
00:29:35.123 [2024-10-16 07:12:34.516030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.123 [2024-10-16 07:12:34.516083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.123 [2024-10-16 07:12:34.516096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.123 [2024-10-16 07:12:34.516103] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.123 [2024-10-16 07:12:34.516109] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.123 [2024-10-16 07:12:34.516123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.123 qpair failed and we were unable to recover it. 00:29:35.123 [2024-10-16 07:12:34.526074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.123 [2024-10-16 07:12:34.526120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.123 [2024-10-16 07:12:34.526133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.123 [2024-10-16 07:12:34.526140] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.123 [2024-10-16 07:12:34.526147] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.123 [2024-10-16 07:12:34.526160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.123 qpair failed and we were unable to recover it. 00:29:35.123 [2024-10-16 07:12:34.536071] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.123 [2024-10-16 07:12:34.536118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.123 [2024-10-16 07:12:34.536132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.123 [2024-10-16 07:12:34.536138] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.123 [2024-10-16 07:12:34.536145] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.123 [2024-10-16 07:12:34.536158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.123 qpair failed and we were unable to recover it. 
00:29:35.124 [2024-10-16 07:12:34.546155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.124 [2024-10-16 07:12:34.546210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.124 [2024-10-16 07:12:34.546233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.124 [2024-10-16 07:12:34.546241] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.124 [2024-10-16 07:12:34.546247] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.124 [2024-10-16 07:12:34.546266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.124 qpair failed and we were unable to recover it. 00:29:35.124 [2024-10-16 07:12:34.556148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.124 [2024-10-16 07:12:34.556232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.124 [2024-10-16 07:12:34.556247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.124 [2024-10-16 07:12:34.556257] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.124 [2024-10-16 07:12:34.556264] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.124 [2024-10-16 07:12:34.556278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.124 qpair failed and we were unable to recover it. 00:29:35.124 [2024-10-16 07:12:34.566161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.124 [2024-10-16 07:12:34.566210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.124 [2024-10-16 07:12:34.566224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.124 [2024-10-16 07:12:34.566231] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.124 [2024-10-16 07:12:34.566238] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.124 [2024-10-16 07:12:34.566251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.124 qpair failed and we were unable to recover it. 
00:29:35.124 [2024-10-16 07:12:34.576181] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.124 [2024-10-16 07:12:34.576228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.124 [2024-10-16 07:12:34.576241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.124 [2024-10-16 07:12:34.576248] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.124 [2024-10-16 07:12:34.576255] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.124 [2024-10-16 07:12:34.576269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.124 qpair failed and we were unable to recover it. 00:29:35.124 [2024-10-16 07:12:34.586221] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.124 [2024-10-16 07:12:34.586284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.124 [2024-10-16 07:12:34.586297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.124 [2024-10-16 07:12:34.586304] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.124 [2024-10-16 07:12:34.586311] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.124 [2024-10-16 07:12:34.586325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.124 qpair failed and we were unable to recover it. 00:29:35.124 [2024-10-16 07:12:34.596213] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.124 [2024-10-16 07:12:34.596267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.124 [2024-10-16 07:12:34.596281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.124 [2024-10-16 07:12:34.596288] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.124 [2024-10-16 07:12:34.596294] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.124 [2024-10-16 07:12:34.596309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.124 qpair failed and we were unable to recover it. 
00:29:35.124 [2024-10-16 07:12:34.606275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.124 [2024-10-16 07:12:34.606323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.124 [2024-10-16 07:12:34.606336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.124 [2024-10-16 07:12:34.606343] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.124 [2024-10-16 07:12:34.606350] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.124 [2024-10-16 07:12:34.606364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.124 qpair failed and we were unable to recover it. 00:29:35.124 [2024-10-16 07:12:34.616290] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.124 [2024-10-16 07:12:34.616340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.124 [2024-10-16 07:12:34.616354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.124 [2024-10-16 07:12:34.616361] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.124 [2024-10-16 07:12:34.616367] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.124 [2024-10-16 07:12:34.616380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.124 qpair failed and we were unable to recover it. 00:29:35.387 [2024-10-16 07:12:34.626361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.387 [2024-10-16 07:12:34.626411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.387 [2024-10-16 07:12:34.626425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.387 [2024-10-16 07:12:34.626432] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.387 [2024-10-16 07:12:34.626438] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.387 [2024-10-16 07:12:34.626452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.387 qpair failed and we were unable to recover it. 
00:29:35.387 [2024-10-16 07:12:34.636399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.387 [2024-10-16 07:12:34.636448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.387 [2024-10-16 07:12:34.636461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.387 [2024-10-16 07:12:34.636469] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.387 [2024-10-16 07:12:34.636475] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.387 [2024-10-16 07:12:34.636488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.387 qpair failed and we were unable to recover it. 00:29:35.387 [2024-10-16 07:12:34.646344] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.387 [2024-10-16 07:12:34.646388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.387 [2024-10-16 07:12:34.646405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.387 [2024-10-16 07:12:34.646413] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.387 [2024-10-16 07:12:34.646419] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.387 [2024-10-16 07:12:34.646432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.387 qpair failed and we were unable to recover it. 00:29:35.387 [2024-10-16 07:12:34.656424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.387 [2024-10-16 07:12:34.656520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.387 [2024-10-16 07:12:34.656534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.387 [2024-10-16 07:12:34.656541] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.387 [2024-10-16 07:12:34.656547] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.387 [2024-10-16 07:12:34.656561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.387 qpair failed and we were unable to recover it. 
00:29:35.387 [2024-10-16 07:12:34.666463] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.387 [2024-10-16 07:12:34.666529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.387 [2024-10-16 07:12:34.666542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.387 [2024-10-16 07:12:34.666549] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.387 [2024-10-16 07:12:34.666556] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.387 [2024-10-16 07:12:34.666569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.387 qpair failed and we were unable to recover it. 00:29:35.387 [2024-10-16 07:12:34.676457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.387 [2024-10-16 07:12:34.676517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.387 [2024-10-16 07:12:34.676542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.387 [2024-10-16 07:12:34.676551] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.387 [2024-10-16 07:12:34.676559] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.387 [2024-10-16 07:12:34.676577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.387 qpair failed and we were unable to recover it. 00:29:35.387 [2024-10-16 07:12:34.686474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.387 [2024-10-16 07:12:34.686522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.387 [2024-10-16 07:12:34.686538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.387 [2024-10-16 07:12:34.686545] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.387 [2024-10-16 07:12:34.686552] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.387 [2024-10-16 07:12:34.686572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.387 qpair failed and we were unable to recover it. 
00:29:35.387 [2024-10-16 07:12:34.696502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.387 [2024-10-16 07:12:34.696555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.387 [2024-10-16 07:12:34.696581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.387 [2024-10-16 07:12:34.696589] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.387 [2024-10-16 07:12:34.696596] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.387 [2024-10-16 07:12:34.696616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.387 qpair failed and we were unable to recover it. 00:29:35.387 [2024-10-16 07:12:34.706544] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.387 [2024-10-16 07:12:34.706604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.387 [2024-10-16 07:12:34.706630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.387 [2024-10-16 07:12:34.706638] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.387 [2024-10-16 07:12:34.706645] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.387 [2024-10-16 07:12:34.706664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.387 qpair failed and we were unable to recover it. 00:29:35.387 [2024-10-16 07:12:34.716546] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.387 [2024-10-16 07:12:34.716618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.387 [2024-10-16 07:12:34.716643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.387 [2024-10-16 07:12:34.716652] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.387 [2024-10-16 07:12:34.716659] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.388 [2024-10-16 07:12:34.716678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.388 qpair failed and we were unable to recover it. 
00:29:35.388 [2024-10-16 07:12:34.726565] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.388 [2024-10-16 07:12:34.726618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.388 [2024-10-16 07:12:34.726633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.388 [2024-10-16 07:12:34.726641] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.388 [2024-10-16 07:12:34.726647] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.388 [2024-10-16 07:12:34.726662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.388 qpair failed and we were unable to recover it. 00:29:35.388 [2024-10-16 07:12:34.736475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.388 [2024-10-16 07:12:34.736524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.388 [2024-10-16 07:12:34.736542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.388 [2024-10-16 07:12:34.736549] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.388 [2024-10-16 07:12:34.736556] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.388 [2024-10-16 07:12:34.736570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.388 qpair failed and we were unable to recover it. 00:29:35.388 [2024-10-16 07:12:34.746701] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.388 [2024-10-16 07:12:34.746756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.388 [2024-10-16 07:12:34.746769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.388 [2024-10-16 07:12:34.746776] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.388 [2024-10-16 07:12:34.746783] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.388 [2024-10-16 07:12:34.746797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.388 qpair failed and we were unable to recover it. 
00:29:35.388 [2024-10-16 07:12:34.756530] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.388 [2024-10-16 07:12:34.756580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.388 [2024-10-16 07:12:34.756594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.388 [2024-10-16 07:12:34.756601] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.388 [2024-10-16 07:12:34.756608] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.388 [2024-10-16 07:12:34.756621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.388 qpair failed and we were unable to recover it. 00:29:35.388 [2024-10-16 07:12:34.766681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.388 [2024-10-16 07:12:34.766728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.388 [2024-10-16 07:12:34.766742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.388 [2024-10-16 07:12:34.766749] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.388 [2024-10-16 07:12:34.766755] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.388 [2024-10-16 07:12:34.766769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.388 qpair failed and we were unable to recover it. 00:29:35.388 [2024-10-16 07:12:34.776761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.388 [2024-10-16 07:12:34.776838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.388 [2024-10-16 07:12:34.776855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.388 [2024-10-16 07:12:34.776862] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.388 [2024-10-16 07:12:34.776871] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.388 [2024-10-16 07:12:34.776885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.388 qpair failed and we were unable to recover it. 
00:29:35.388 [2024-10-16 07:12:34.786785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.388 [2024-10-16 07:12:34.786838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.388 [2024-10-16 07:12:34.786856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.388 [2024-10-16 07:12:34.786863] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.388 [2024-10-16 07:12:34.786869] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.388 [2024-10-16 07:12:34.786883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.388 qpair failed and we were unable to recover it. 00:29:35.388 [2024-10-16 07:12:34.796773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.388 [2024-10-16 07:12:34.796827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.388 [2024-10-16 07:12:34.796840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.388 [2024-10-16 07:12:34.796851] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.388 [2024-10-16 07:12:34.796857] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.388 [2024-10-16 07:12:34.796871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.388 qpair failed and we were unable to recover it. 00:29:35.388 [2024-10-16 07:12:34.806801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.388 [2024-10-16 07:12:34.806850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.388 [2024-10-16 07:12:34.806864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.388 [2024-10-16 07:12:34.806871] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.388 [2024-10-16 07:12:34.806877] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.388 [2024-10-16 07:12:34.806891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.388 qpair failed and we were unable to recover it. 
00:29:35.388 [2024-10-16 07:12:34.816708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.388 [2024-10-16 07:12:34.816757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.388 [2024-10-16 07:12:34.816770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.388 [2024-10-16 07:12:34.816777] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.388 [2024-10-16 07:12:34.816784] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.388 [2024-10-16 07:12:34.816797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.388 qpair failed and we were unable to recover it. 00:29:35.388 [2024-10-16 07:12:34.826913] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.388 [2024-10-16 07:12:34.826971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.388 [2024-10-16 07:12:34.826985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.388 [2024-10-16 07:12:34.826992] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.388 [2024-10-16 07:12:34.826998] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.388 [2024-10-16 07:12:34.827012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.388 qpair failed and we were unable to recover it. 00:29:35.388 [2024-10-16 07:12:34.836876] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.388 [2024-10-16 07:12:34.836927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.388 [2024-10-16 07:12:34.836940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.388 [2024-10-16 07:12:34.836947] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.388 [2024-10-16 07:12:34.836954] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.388 [2024-10-16 07:12:34.836967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.388 qpair failed and we were unable to recover it. 
00:29:35.388 [2024-10-16 07:12:34.846921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.388 [2024-10-16 07:12:34.846970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.388 [2024-10-16 07:12:34.846984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.388 [2024-10-16 07:12:34.846990] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.388 [2024-10-16 07:12:34.846998] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.388 [2024-10-16 07:12:34.847012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.388 qpair failed and we were unable to recover it. 00:29:35.388 [2024-10-16 07:12:34.856936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.388 [2024-10-16 07:12:34.856985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.388 [2024-10-16 07:12:34.856999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.388 [2024-10-16 07:12:34.857006] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.388 [2024-10-16 07:12:34.857013] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.388 [2024-10-16 07:12:34.857027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.388 qpair failed and we were unable to recover it. 00:29:35.388 [2024-10-16 07:12:34.867004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.389 [2024-10-16 07:12:34.867058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.389 [2024-10-16 07:12:34.867072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.389 [2024-10-16 07:12:34.867079] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.389 [2024-10-16 07:12:34.867088] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.389 [2024-10-16 07:12:34.867103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.389 qpair failed and we were unable to recover it. 
00:29:35.389 [2024-10-16 07:12:34.876991] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.389 [2024-10-16 07:12:34.877058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.389 [2024-10-16 07:12:34.877071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.389 [2024-10-16 07:12:34.877078] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.389 [2024-10-16 07:12:34.877085] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.389 [2024-10-16 07:12:34.877098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.389 qpair failed and we were unable to recover it. 00:29:35.651 [2024-10-16 07:12:34.887001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.651 [2024-10-16 07:12:34.887049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.651 [2024-10-16 07:12:34.887062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.651 [2024-10-16 07:12:34.887069] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.651 [2024-10-16 07:12:34.887076] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.651 [2024-10-16 07:12:34.887090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.651 qpair failed and we were unable to recover it. 00:29:35.651 [2024-10-16 07:12:34.897059] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.651 [2024-10-16 07:12:34.897107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.651 [2024-10-16 07:12:34.897121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.651 [2024-10-16 07:12:34.897128] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.651 [2024-10-16 07:12:34.897134] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.651 [2024-10-16 07:12:34.897148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.651 qpair failed and we were unable to recover it. 
00:29:35.651 [2024-10-16 07:12:34.907101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.651 [2024-10-16 07:12:34.907162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.651 [2024-10-16 07:12:34.907175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.651 [2024-10-16 07:12:34.907183] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.651 [2024-10-16 07:12:34.907189] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.651 [2024-10-16 07:12:34.907202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.651 qpair failed and we were unable to recover it. 00:29:35.651 [2024-10-16 07:12:34.917139] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.651 [2024-10-16 07:12:34.917198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.651 [2024-10-16 07:12:34.917211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.651 [2024-10-16 07:12:34.917219] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.651 [2024-10-16 07:12:34.917225] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.651 [2024-10-16 07:12:34.917239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.651 qpair failed and we were unable to recover it. 00:29:35.651 [2024-10-16 07:12:34.927133] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.651 [2024-10-16 07:12:34.927183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.651 [2024-10-16 07:12:34.927197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.652 [2024-10-16 07:12:34.927204] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.652 [2024-10-16 07:12:34.927210] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.652 [2024-10-16 07:12:34.927223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.652 qpair failed and we were unable to recover it. 
00:29:35.652 [2024-10-16 07:12:34.937030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.652 [2024-10-16 07:12:34.937076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.652 [2024-10-16 07:12:34.937090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.652 [2024-10-16 07:12:34.937097] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.652 [2024-10-16 07:12:34.937103] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.652 [2024-10-16 07:12:34.937117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.652 qpair failed and we were unable to recover it. 00:29:35.652 [2024-10-16 07:12:34.947233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.652 [2024-10-16 07:12:34.947308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.652 [2024-10-16 07:12:34.947322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.652 [2024-10-16 07:12:34.947330] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.652 [2024-10-16 07:12:34.947338] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.652 [2024-10-16 07:12:34.947352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.652 qpair failed and we were unable to recover it. 00:29:35.652 [2024-10-16 07:12:34.957179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.652 [2024-10-16 07:12:34.957228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.652 [2024-10-16 07:12:34.957241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.652 [2024-10-16 07:12:34.957252] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.652 [2024-10-16 07:12:34.957258] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.652 [2024-10-16 07:12:34.957272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.652 qpair failed and we were unable to recover it. 
00:29:35.652 [2024-10-16 07:12:34.967223] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.652 [2024-10-16 07:12:34.967297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.652 [2024-10-16 07:12:34.967311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.652 [2024-10-16 07:12:34.967318] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.652 [2024-10-16 07:12:34.967324] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.652 [2024-10-16 07:12:34.967338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.652 qpair failed and we were unable to recover it. 00:29:35.652 [2024-10-16 07:12:34.977262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.652 [2024-10-16 07:12:34.977310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.652 [2024-10-16 07:12:34.977324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.652 [2024-10-16 07:12:34.977331] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.652 [2024-10-16 07:12:34.977337] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.652 [2024-10-16 07:12:34.977351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.652 qpair failed and we were unable to recover it. 00:29:35.652 [2024-10-16 07:12:34.987308] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.652 [2024-10-16 07:12:34.987363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.652 [2024-10-16 07:12:34.987376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.652 [2024-10-16 07:12:34.987383] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.652 [2024-10-16 07:12:34.987390] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:35.652 [2024-10-16 07:12:34.987403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.652 qpair failed and we were unable to recover it. 
00:29:35.652 [2024-10-16 07:12:34.997235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.652 [2024-10-16 07:12:34.997281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.652 [2024-10-16 07:12:34.997295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.652 [2024-10-16 07:12:34.997302] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.652 [2024-10-16 07:12:34.997308] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:35.652 [2024-10-16 07:12:34.997321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.652 qpair failed and we were unable to recover it.
00:29:35.652 [2024-10-16 07:12:35.007376] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.652 [2024-10-16 07:12:35.007469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.652 [2024-10-16 07:12:35.007483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.652 [2024-10-16 07:12:35.007490] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.652 [2024-10-16 07:12:35.007497] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:35.652 [2024-10-16 07:12:35.007511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.652 qpair failed and we were unable to recover it.
00:29:35.652 [2024-10-16 07:12:35.017337] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.652 [2024-10-16 07:12:35.017380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.652 [2024-10-16 07:12:35.017396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.652 [2024-10-16 07:12:35.017403] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.652 [2024-10-16 07:12:35.017409] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:35.652 [2024-10-16 07:12:35.017423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.652 qpair failed and we were unable to recover it.
00:29:35.652 [2024-10-16 07:12:35.027444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.652 [2024-10-16 07:12:35.027515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.652 [2024-10-16 07:12:35.027529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.652 [2024-10-16 07:12:35.027536] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.652 [2024-10-16 07:12:35.027542] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:35.652 [2024-10-16 07:12:35.027556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.652 qpair failed and we were unable to recover it.
00:29:35.652 [2024-10-16 07:12:35.037439] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.652 [2024-10-16 07:12:35.037486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.652 [2024-10-16 07:12:35.037499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.652 [2024-10-16 07:12:35.037506] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.652 [2024-10-16 07:12:35.037512] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:35.652 [2024-10-16 07:12:35.037526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.652 qpair failed and we were unable to recover it.
00:29:35.652 [2024-10-16 07:12:35.047453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.653 [2024-10-16 07:12:35.047540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.653 [2024-10-16 07:12:35.047554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.653 [2024-10-16 07:12:35.047564] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.653 [2024-10-16 07:12:35.047570] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:35.653 [2024-10-16 07:12:35.047585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.653 qpair failed and we were unable to recover it.
00:29:35.653 [2024-10-16 07:12:35.057464] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.653 [2024-10-16 07:12:35.057513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.653 [2024-10-16 07:12:35.057527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.653 [2024-10-16 07:12:35.057534] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.653 [2024-10-16 07:12:35.057540] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:35.653 [2024-10-16 07:12:35.057554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.653 qpair failed and we were unable to recover it.
00:29:35.653 [2024-10-16 07:12:35.067546] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.653 [2024-10-16 07:12:35.067598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.653 [2024-10-16 07:12:35.067612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.653 [2024-10-16 07:12:35.067619] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.653 [2024-10-16 07:12:35.067625] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:35.653 [2024-10-16 07:12:35.067639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.653 qpair failed and we were unable to recover it.
00:29:35.653 [2024-10-16 07:12:35.077514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.653 [2024-10-16 07:12:35.077564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.653 [2024-10-16 07:12:35.077577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.653 [2024-10-16 07:12:35.077584] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.653 [2024-10-16 07:12:35.077590] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:35.653 [2024-10-16 07:12:35.077603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.653 qpair failed and we were unable to recover it.
00:29:35.653 [2024-10-16 07:12:35.087549] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.653 [2024-10-16 07:12:35.087596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.653 [2024-10-16 07:12:35.087610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.653 [2024-10-16 07:12:35.087617] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.653 [2024-10-16 07:12:35.087623] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:35.653 [2024-10-16 07:12:35.087637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.653 qpair failed and we were unable to recover it.
00:29:35.653 [2024-10-16 07:12:35.097583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.653 [2024-10-16 07:12:35.097655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.653 [2024-10-16 07:12:35.097669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.653 [2024-10-16 07:12:35.097676] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.653 [2024-10-16 07:12:35.097683] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:35.653 [2024-10-16 07:12:35.097697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.653 qpair failed and we were unable to recover it.
00:29:35.653 [2024-10-16 07:12:35.107694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.653 [2024-10-16 07:12:35.107764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.653 [2024-10-16 07:12:35.107790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.653 [2024-10-16 07:12:35.107799] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.653 [2024-10-16 07:12:35.107806] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:35.653 [2024-10-16 07:12:35.107825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.653 qpair failed and we were unable to recover it.
00:29:35.653 [2024-10-16 07:12:35.117646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.653 [2024-10-16 07:12:35.117697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.653 [2024-10-16 07:12:35.117713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.653 [2024-10-16 07:12:35.117720] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.653 [2024-10-16 07:12:35.117727] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:35.653 [2024-10-16 07:12:35.117742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.653 qpair failed and we were unable to recover it.
00:29:35.653 [2024-10-16 07:12:35.127662] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.653 [2024-10-16 07:12:35.127707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.653 [2024-10-16 07:12:35.127722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.653 [2024-10-16 07:12:35.127729] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.653 [2024-10-16 07:12:35.127735] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:35.653 [2024-10-16 07:12:35.127749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.653 qpair failed and we were unable to recover it.
00:29:35.653 [2024-10-16 07:12:35.137698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.653 [2024-10-16 07:12:35.137744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.653 [2024-10-16 07:12:35.137762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.653 [2024-10-16 07:12:35.137770] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.653 [2024-10-16 07:12:35.137776] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:35.653 [2024-10-16 07:12:35.137791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.653 qpair failed and we were unable to recover it.
00:29:35.653 [2024-10-16 07:12:35.147755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.653 [2024-10-16 07:12:35.147806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.653 [2024-10-16 07:12:35.147820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.653 [2024-10-16 07:12:35.147827] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.653 [2024-10-16 07:12:35.147834] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:35.653 [2024-10-16 07:12:35.147853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.653 qpair failed and we were unable to recover it.
00:29:35.916 [2024-10-16 07:12:35.157705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.916 [2024-10-16 07:12:35.157772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.916 [2024-10-16 07:12:35.157786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.916 [2024-10-16 07:12:35.157794] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.916 [2024-10-16 07:12:35.157800] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:35.916 [2024-10-16 07:12:35.157814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.916 qpair failed and we were unable to recover it.
00:29:35.916 [2024-10-16 07:12:35.167774] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.916 [2024-10-16 07:12:35.167825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.916 [2024-10-16 07:12:35.167839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.916 [2024-10-16 07:12:35.167851] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.916 [2024-10-16 07:12:35.167858] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:35.916 [2024-10-16 07:12:35.167871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.916 qpair failed and we were unable to recover it.
00:29:35.916 [2024-10-16 07:12:35.177802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.916 [2024-10-16 07:12:35.177854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.916 [2024-10-16 07:12:35.177869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.916 [2024-10-16 07:12:35.177877] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.916 [2024-10-16 07:12:35.177883] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:35.916 [2024-10-16 07:12:35.177902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.916 qpair failed and we were unable to recover it.
00:29:35.916 [2024-10-16 07:12:35.187881] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.916 [2024-10-16 07:12:35.187933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.916 [2024-10-16 07:12:35.187947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.916 [2024-10-16 07:12:35.187954] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.916 [2024-10-16 07:12:35.187961] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:35.916 [2024-10-16 07:12:35.187975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.916 qpair failed and we were unable to recover it.
00:29:35.916 [2024-10-16 07:12:35.197745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.916 [2024-10-16 07:12:35.197794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.916 [2024-10-16 07:12:35.197808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.916 [2024-10-16 07:12:35.197815] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.916 [2024-10-16 07:12:35.197822] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:35.916 [2024-10-16 07:12:35.197835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.916 qpair failed and we were unable to recover it.
00:29:35.916 [2024-10-16 07:12:35.207882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.916 [2024-10-16 07:12:35.207926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.916 [2024-10-16 07:12:35.207940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.916 [2024-10-16 07:12:35.207947] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.916 [2024-10-16 07:12:35.207954] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:35.916 [2024-10-16 07:12:35.207968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.916 qpair failed and we were unable to recover it.
00:29:35.916 [2024-10-16 07:12:35.217907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.916 [2024-10-16 07:12:35.217967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.916 [2024-10-16 07:12:35.217980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.916 [2024-10-16 07:12:35.217987] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.916 [2024-10-16 07:12:35.217994] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:35.916 [2024-10-16 07:12:35.218007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.916 qpair failed and we were unable to recover it.
00:29:35.916 [2024-10-16 07:12:35.227959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.916 [2024-10-16 07:12:35.228014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.916 [2024-10-16 07:12:35.228034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.916 [2024-10-16 07:12:35.228041] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.916 [2024-10-16 07:12:35.228047] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:35.916 [2024-10-16 07:12:35.228062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.916 qpair failed and we were unable to recover it.
00:29:35.916 [2024-10-16 07:12:35.237944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.916 [2024-10-16 07:12:35.237993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.916 [2024-10-16 07:12:35.238007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.916 [2024-10-16 07:12:35.238014] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.916 [2024-10-16 07:12:35.238020] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:35.916 [2024-10-16 07:12:35.238034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.916 qpair failed and we were unable to recover it.
00:29:35.916 [2024-10-16 07:12:35.248003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.916 [2024-10-16 07:12:35.248053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.916 [2024-10-16 07:12:35.248066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.916 [2024-10-16 07:12:35.248074] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.916 [2024-10-16 07:12:35.248080] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:35.916 [2024-10-16 07:12:35.248093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.916 qpair failed and we were unable to recover it.
00:29:35.916 [2024-10-16 07:12:35.258034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.916 [2024-10-16 07:12:35.258084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.916 [2024-10-16 07:12:35.258097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.916 [2024-10-16 07:12:35.258104] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.916 [2024-10-16 07:12:35.258111] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:35.916 [2024-10-16 07:12:35.258124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.916 qpair failed and we were unable to recover it.
00:29:35.916 [2024-10-16 07:12:35.268088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.916 [2024-10-16 07:12:35.268143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.916 [2024-10-16 07:12:35.268156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.916 [2024-10-16 07:12:35.268163] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.916 [2024-10-16 07:12:35.268170] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:35.916 [2024-10-16 07:12:35.268187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.916 qpair failed and we were unable to recover it.
00:29:35.916 [2024-10-16 07:12:35.278093] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.916 [2024-10-16 07:12:35.278149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.917 [2024-10-16 07:12:35.278163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.917 [2024-10-16 07:12:35.278170] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.917 [2024-10-16 07:12:35.278176] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:35.917 [2024-10-16 07:12:35.278190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.917 qpair failed and we were unable to recover it.
00:29:35.917 [2024-10-16 07:12:35.288090] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.917 [2024-10-16 07:12:35.288151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.917 [2024-10-16 07:12:35.288164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.917 [2024-10-16 07:12:35.288172] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.917 [2024-10-16 07:12:35.288178] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:35.917 [2024-10-16 07:12:35.288192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.917 qpair failed and we were unable to recover it.
00:29:35.917 [2024-10-16 07:12:35.298139] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.917 [2024-10-16 07:12:35.298185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.917 [2024-10-16 07:12:35.298199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.917 [2024-10-16 07:12:35.298206] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.917 [2024-10-16 07:12:35.298212] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:35.917 [2024-10-16 07:12:35.298225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.917 qpair failed and we were unable to recover it.
00:29:35.917 [2024-10-16 07:12:35.308209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.917 [2024-10-16 07:12:35.308264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.917 [2024-10-16 07:12:35.308278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.917 [2024-10-16 07:12:35.308285] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.917 [2024-10-16 07:12:35.308292] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:35.917 [2024-10-16 07:12:35.308306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.917 qpair failed and we were unable to recover it.
00:29:35.917 [2024-10-16 07:12:35.318175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.917 [2024-10-16 07:12:35.318225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.917 [2024-10-16 07:12:35.318242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.917 [2024-10-16 07:12:35.318249] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.917 [2024-10-16 07:12:35.318256] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:35.917 [2024-10-16 07:12:35.318270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.917 qpair failed and we were unable to recover it.
00:29:35.917 [2024-10-16 07:12:35.328186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.917 [2024-10-16 07:12:35.328230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.917 [2024-10-16 07:12:35.328244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.917 [2024-10-16 07:12:35.328251] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.917 [2024-10-16 07:12:35.328257] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:35.917 [2024-10-16 07:12:35.328271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.917 qpair failed and we were unable to recover it.
00:29:35.917 [2024-10-16 07:12:35.338235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.917 [2024-10-16 07:12:35.338282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.917 [2024-10-16 07:12:35.338296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.917 [2024-10-16 07:12:35.338303] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.917 [2024-10-16 07:12:35.338309] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:35.917 [2024-10-16 07:12:35.338323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.917 qpair failed and we were unable to recover it.
00:29:35.917 [2024-10-16 07:12:35.348308] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.917 [2024-10-16 07:12:35.348363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.917 [2024-10-16 07:12:35.348377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.917 [2024-10-16 07:12:35.348384] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.917 [2024-10-16 07:12:35.348392] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:35.917 [2024-10-16 07:12:35.348406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.917 qpair failed and we were unable to recover it.
00:29:35.917 [2024-10-16 07:12:35.358202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.917 [2024-10-16 07:12:35.358261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.917 [2024-10-16 07:12:35.358274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.917 [2024-10-16 07:12:35.358281] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.917 [2024-10-16 07:12:35.358291] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:35.917 [2024-10-16 07:12:35.358305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.917 qpair failed and we were unable to recover it.
00:29:35.917 [2024-10-16 07:12:35.368343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.917 [2024-10-16 07:12:35.368388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.917 [2024-10-16 07:12:35.368402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.917 [2024-10-16 07:12:35.368409] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.917 [2024-10-16 07:12:35.368415] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:35.917 [2024-10-16 07:12:35.368429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.917 qpair failed and we were unable to recover it.
00:29:35.917 [2024-10-16 07:12:35.378351] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.917 [2024-10-16 07:12:35.378405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.917 [2024-10-16 07:12:35.378419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.917 [2024-10-16 07:12:35.378426] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.917 [2024-10-16 07:12:35.378432] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:35.917 [2024-10-16 07:12:35.378446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.917 qpair failed and we were unable to recover it.
00:29:35.917 [2024-10-16 07:12:35.388473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.917 [2024-10-16 07:12:35.388545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.917 [2024-10-16 07:12:35.388558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.917 [2024-10-16 07:12:35.388565] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.917 [2024-10-16 07:12:35.388571] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:35.917 [2024-10-16 07:12:35.388585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.917 qpair failed and we were unable to recover it.
00:29:35.917 [2024-10-16 07:12:35.398439] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.917 [2024-10-16 07:12:35.398487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.917 [2024-10-16 07:12:35.398502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.917 [2024-10-16 07:12:35.398509] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.917 [2024-10-16 07:12:35.398515] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:35.917 [2024-10-16 07:12:35.398529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.917 qpair failed and we were unable to recover it.
00:29:35.917 [2024-10-16 07:12:35.408446] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.918 [2024-10-16 07:12:35.408509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.918 [2024-10-16 07:12:35.408523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.918 [2024-10-16 07:12:35.408530] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.918 [2024-10-16 07:12:35.408536] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:35.918 [2024-10-16 07:12:35.408550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.918 qpair failed and we were unable to recover it.
00:29:36.180 [2024-10-16 07:12:35.418468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.180 [2024-10-16 07:12:35.418512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.180 [2024-10-16 07:12:35.418526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.180 [2024-10-16 07:12:35.418533] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.180 [2024-10-16 07:12:35.418539] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:36.180 [2024-10-16 07:12:35.418553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:36.180 qpair failed and we were unable to recover it.
00:29:36.180 [2024-10-16 07:12:35.428551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.180 [2024-10-16 07:12:35.428607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.180 [2024-10-16 07:12:35.428621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.180 [2024-10-16 07:12:35.428628] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.180 [2024-10-16 07:12:35.428634] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:36.180 [2024-10-16 07:12:35.428648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:36.180 qpair failed and we were unable to recover it.
00:29:36.180 [2024-10-16 07:12:35.438546] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.180 [2024-10-16 07:12:35.438601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.180 [2024-10-16 07:12:35.438614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.180 [2024-10-16 07:12:35.438621] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.180 [2024-10-16 07:12:35.438627] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:36.180 [2024-10-16 07:12:35.438641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:36.180 qpair failed and we were unable to recover it.
00:29:36.180 [2024-10-16 07:12:35.448547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.180 [2024-10-16 07:12:35.448599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.180 [2024-10-16 07:12:35.448624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.180 [2024-10-16 07:12:35.448638] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.180 [2024-10-16 07:12:35.448645] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:36.180 [2024-10-16 07:12:35.448664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:36.180 qpair failed and we were unable to recover it.
00:29:36.180 [2024-10-16 07:12:35.458580] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.180 [2024-10-16 07:12:35.458631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.180 [2024-10-16 07:12:35.458647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.180 [2024-10-16 07:12:35.458654] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.180 [2024-10-16 07:12:35.458661] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:36.180 [2024-10-16 07:12:35.458676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:36.180 qpair failed and we were unable to recover it.
00:29:36.180 [2024-10-16 07:12:35.468619] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.180 [2024-10-16 07:12:35.468675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.180 [2024-10-16 07:12:35.468690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.180 [2024-10-16 07:12:35.468697] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.180 [2024-10-16 07:12:35.468703] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:36.180 [2024-10-16 07:12:35.468717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:36.180 qpair failed and we were unable to recover it.
00:29:36.180 [2024-10-16 07:12:35.478649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.180 [2024-10-16 07:12:35.478702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.180 [2024-10-16 07:12:35.478716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.180 [2024-10-16 07:12:35.478723] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.180 [2024-10-16 07:12:35.478730] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:36.180 [2024-10-16 07:12:35.478744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:36.180 qpair failed and we were unable to recover it.
00:29:36.180 [2024-10-16 07:12:35.488677] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.180 [2024-10-16 07:12:35.488725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.180 [2024-10-16 07:12:35.488739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.180 [2024-10-16 07:12:35.488746] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.180 [2024-10-16 07:12:35.488752] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:36.180 [2024-10-16 07:12:35.488767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:36.180 qpair failed and we were unable to recover it.
00:29:36.180 [2024-10-16 07:12:35.498684] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.180 [2024-10-16 07:12:35.498733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.180 [2024-10-16 07:12:35.498747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.180 [2024-10-16 07:12:35.498754] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.180 [2024-10-16 07:12:35.498760] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:36.180 [2024-10-16 07:12:35.498774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:36.181 qpair failed and we were unable to recover it.
00:29:36.181 [2024-10-16 07:12:35.508768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.181 [2024-10-16 07:12:35.508823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.181 [2024-10-16 07:12:35.508837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.181 [2024-10-16 07:12:35.508847] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.181 [2024-10-16 07:12:35.508854] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:36.181 [2024-10-16 07:12:35.508868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:36.181 qpair failed and we were unable to recover it.
00:29:36.181 [2024-10-16 07:12:35.518770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.181 [2024-10-16 07:12:35.518816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.181 [2024-10-16 07:12:35.518830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.181 [2024-10-16 07:12:35.518837] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.181 [2024-10-16 07:12:35.518848] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:36.181 [2024-10-16 07:12:35.518863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:36.181 qpair failed and we were unable to recover it.
00:29:36.181 [2024-10-16 07:12:35.528782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.181 [2024-10-16 07:12:35.528831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.181 [2024-10-16 07:12:35.528849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.181 [2024-10-16 07:12:35.528856] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.181 [2024-10-16 07:12:35.528862] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:36.181 [2024-10-16 07:12:35.528877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:36.181 qpair failed and we were unable to recover it.
00:29:36.181 [2024-10-16 07:12:35.538801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.181 [2024-10-16 07:12:35.538859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.181 [2024-10-16 07:12:35.538873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.181 [2024-10-16 07:12:35.538884] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.181 [2024-10-16 07:12:35.538890] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:36.181 [2024-10-16 07:12:35.538904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:36.181 qpair failed and we were unable to recover it.
00:29:36.181 [2024-10-16 07:12:35.548855] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.181 [2024-10-16 07:12:35.548910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.181 [2024-10-16 07:12:35.548924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.181 [2024-10-16 07:12:35.548931] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.181 [2024-10-16 07:12:35.548938] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:36.181 [2024-10-16 07:12:35.548952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:36.181 qpair failed and we were unable to recover it.
00:29:36.181 [2024-10-16 07:12:35.558886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.181 [2024-10-16 07:12:35.558941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.181 [2024-10-16 07:12:35.558955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.181 [2024-10-16 07:12:35.558962] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.181 [2024-10-16 07:12:35.558968] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:36.181 [2024-10-16 07:12:35.558982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:36.181 qpair failed and we were unable to recover it.
00:29:36.181 [2024-10-16 07:12:35.568889] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.181 [2024-10-16 07:12:35.568937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.181 [2024-10-16 07:12:35.568952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.181 [2024-10-16 07:12:35.568959] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.181 [2024-10-16 07:12:35.568965] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:36.181 [2024-10-16 07:12:35.568979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.181 qpair failed and we were unable to recover it. 00:29:36.181 [2024-10-16 07:12:35.578912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.181 [2024-10-16 07:12:35.578956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.181 [2024-10-16 07:12:35.578969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.181 [2024-10-16 07:12:35.578976] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.181 [2024-10-16 07:12:35.578982] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:36.181 [2024-10-16 07:12:35.578996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.181 qpair failed and we were unable to recover it. 00:29:36.181 [2024-10-16 07:12:35.588949] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.181 [2024-10-16 07:12:35.589006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.181 [2024-10-16 07:12:35.589019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.181 [2024-10-16 07:12:35.589026] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.181 [2024-10-16 07:12:35.589032] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:36.181 [2024-10-16 07:12:35.589046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.181 qpair failed and we were unable to recover it. 
00:29:36.181 [2024-10-16 07:12:35.598987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.181 [2024-10-16 07:12:35.599039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.181 [2024-10-16 07:12:35.599053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.181 [2024-10-16 07:12:35.599060] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.181 [2024-10-16 07:12:35.599066] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:36.181 [2024-10-16 07:12:35.599080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.181 qpair failed and we were unable to recover it. 00:29:36.181 [2024-10-16 07:12:35.609049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.181 [2024-10-16 07:12:35.609112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.181 [2024-10-16 07:12:35.609125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.181 [2024-10-16 07:12:35.609132] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.181 [2024-10-16 07:12:35.609138] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:36.181 [2024-10-16 07:12:35.609152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.181 qpair failed and we were unable to recover it. 00:29:36.181 [2024-10-16 07:12:35.619029] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.181 [2024-10-16 07:12:35.619081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.181 [2024-10-16 07:12:35.619094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.181 [2024-10-16 07:12:35.619101] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.181 [2024-10-16 07:12:35.619108] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:36.181 [2024-10-16 07:12:35.619121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.181 qpair failed and we were unable to recover it. 
00:29:36.181 [2024-10-16 07:12:35.628970] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.181 [2024-10-16 07:12:35.629035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.181 [2024-10-16 07:12:35.629052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.181 [2024-10-16 07:12:35.629059] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.181 [2024-10-16 07:12:35.629065] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:36.181 [2024-10-16 07:12:35.629079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.181 qpair failed and we were unable to recover it. 00:29:36.181 [2024-10-16 07:12:35.639095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.181 [2024-10-16 07:12:35.639146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.181 [2024-10-16 07:12:35.639160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.181 [2024-10-16 07:12:35.639167] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.181 [2024-10-16 07:12:35.639173] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:36.181 [2024-10-16 07:12:35.639187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.181 qpair failed and we were unable to recover it. 00:29:36.181 [2024-10-16 07:12:35.649017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.182 [2024-10-16 07:12:35.649069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.182 [2024-10-16 07:12:35.649082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.182 [2024-10-16 07:12:35.649090] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.182 [2024-10-16 07:12:35.649096] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:36.182 [2024-10-16 07:12:35.649110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.182 qpair failed and we were unable to recover it. 
00:29:36.182 [2024-10-16 07:12:35.659135] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.182 [2024-10-16 07:12:35.659177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.182 [2024-10-16 07:12:35.659191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.182 [2024-10-16 07:12:35.659198] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.182 [2024-10-16 07:12:35.659204] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:36.182 [2024-10-16 07:12:35.659217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.182 qpair failed and we were unable to recover it. 00:29:36.182 [2024-10-16 07:12:35.669212] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.182 [2024-10-16 07:12:35.669294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.182 [2024-10-16 07:12:35.669308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.182 [2024-10-16 07:12:35.669315] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.182 [2024-10-16 07:12:35.669321] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:36.182 [2024-10-16 07:12:35.669339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.182 qpair failed and we were unable to recover it. 00:29:36.444 [2024-10-16 07:12:35.679209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.444 [2024-10-16 07:12:35.679261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.444 [2024-10-16 07:12:35.679275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.444 [2024-10-16 07:12:35.679282] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.444 [2024-10-16 07:12:35.679288] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:36.444 [2024-10-16 07:12:35.679301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.444 qpair failed and we were unable to recover it. 
00:29:36.444 [2024-10-16 07:12:35.689224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.444 [2024-10-16 07:12:35.689271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.444 [2024-10-16 07:12:35.689285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.444 [2024-10-16 07:12:35.689292] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.444 [2024-10-16 07:12:35.689299] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:36.444 [2024-10-16 07:12:35.689312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.444 qpair failed and we were unable to recover it. 00:29:36.444 [2024-10-16 07:12:35.699255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.444 [2024-10-16 07:12:35.699304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.444 [2024-10-16 07:12:35.699318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.444 [2024-10-16 07:12:35.699324] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.444 [2024-10-16 07:12:35.699331] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:36.444 [2024-10-16 07:12:35.699344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.444 qpair failed and we were unable to recover it. 00:29:36.444 [2024-10-16 07:12:35.709325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.444 [2024-10-16 07:12:35.709378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.444 [2024-10-16 07:12:35.709392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.444 [2024-10-16 07:12:35.709399] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.444 [2024-10-16 07:12:35.709405] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:36.444 [2024-10-16 07:12:35.709419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.444 qpair failed and we were unable to recover it. 
00:29:36.444 [2024-10-16 07:12:35.719310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.445 [2024-10-16 07:12:35.719368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.445 [2024-10-16 07:12:35.719386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.445 [2024-10-16 07:12:35.719393] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.445 [2024-10-16 07:12:35.719399] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:36.445 [2024-10-16 07:12:35.719413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.445 qpair failed and we were unable to recover it. 00:29:36.445 [2024-10-16 07:12:35.729332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.445 [2024-10-16 07:12:35.729380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.445 [2024-10-16 07:12:35.729393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.445 [2024-10-16 07:12:35.729400] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.445 [2024-10-16 07:12:35.729407] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:36.445 [2024-10-16 07:12:35.729420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.445 qpair failed and we were unable to recover it. 00:29:36.445 [2024-10-16 07:12:35.739232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.445 [2024-10-16 07:12:35.739282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.445 [2024-10-16 07:12:35.739296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.445 [2024-10-16 07:12:35.739303] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.445 [2024-10-16 07:12:35.739309] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:36.445 [2024-10-16 07:12:35.739323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.445 qpair failed and we were unable to recover it. 
00:29:36.445 [2024-10-16 07:12:35.749440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.445 [2024-10-16 07:12:35.749521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.445 [2024-10-16 07:12:35.749535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.445 [2024-10-16 07:12:35.749542] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.445 [2024-10-16 07:12:35.749548] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:36.445 [2024-10-16 07:12:35.749562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.445 qpair failed and we were unable to recover it. 00:29:36.445 [2024-10-16 07:12:35.759438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.445 [2024-10-16 07:12:35.759488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.445 [2024-10-16 07:12:35.759502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.445 [2024-10-16 07:12:35.759509] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.445 [2024-10-16 07:12:35.759515] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:36.445 [2024-10-16 07:12:35.759532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.445 qpair failed and we were unable to recover it. 00:29:36.445 [2024-10-16 07:12:35.769439] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.445 [2024-10-16 07:12:35.769491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.445 [2024-10-16 07:12:35.769505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.445 [2024-10-16 07:12:35.769512] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.445 [2024-10-16 07:12:35.769518] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:36.445 [2024-10-16 07:12:35.769532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.445 qpair failed and we were unable to recover it. 
00:29:36.445 [2024-10-16 07:12:35.779442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.445 [2024-10-16 07:12:35.779487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.445 [2024-10-16 07:12:35.779501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.445 [2024-10-16 07:12:35.779508] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.445 [2024-10-16 07:12:35.779514] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:36.445 [2024-10-16 07:12:35.779528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.445 qpair failed and we were unable to recover it. 00:29:36.445 [2024-10-16 07:12:35.789526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.445 [2024-10-16 07:12:35.789580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.445 [2024-10-16 07:12:35.789593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.445 [2024-10-16 07:12:35.789600] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.445 [2024-10-16 07:12:35.789606] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:36.445 [2024-10-16 07:12:35.789620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.445 qpair failed and we were unable to recover it. 00:29:36.445 [2024-10-16 07:12:35.799531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.445 [2024-10-16 07:12:35.799579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.445 [2024-10-16 07:12:35.799592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.445 [2024-10-16 07:12:35.799599] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.445 [2024-10-16 07:12:35.799605] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:36.445 [2024-10-16 07:12:35.799619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.445 qpair failed and we were unable to recover it. 
00:29:36.445 [2024-10-16 07:12:35.809554] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.445 [2024-10-16 07:12:35.809607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.445 [2024-10-16 07:12:35.809637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.445 [2024-10-16 07:12:35.809646] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.445 [2024-10-16 07:12:35.809653] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:36.445 [2024-10-16 07:12:35.809673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.445 qpair failed and we were unable to recover it. 00:29:36.445 [2024-10-16 07:12:35.819629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.445 [2024-10-16 07:12:35.819719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.445 [2024-10-16 07:12:35.819744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.445 [2024-10-16 07:12:35.819753] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.445 [2024-10-16 07:12:35.819760] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:36.445 [2024-10-16 07:12:35.819779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.445 qpair failed and we were unable to recover it. 00:29:36.445 [2024-10-16 07:12:35.829658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.445 [2024-10-16 07:12:35.829736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.445 [2024-10-16 07:12:35.829751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.445 [2024-10-16 07:12:35.829759] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.445 [2024-10-16 07:12:35.829765] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:36.445 [2024-10-16 07:12:35.829780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.445 qpair failed and we were unable to recover it. 
00:29:36.445 [2024-10-16 07:12:35.839650] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.445 [2024-10-16 07:12:35.839708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.445 [2024-10-16 07:12:35.839722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.445 [2024-10-16 07:12:35.839729] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.445 [2024-10-16 07:12:35.839736] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:36.445 [2024-10-16 07:12:35.839750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.445 qpair failed and we were unable to recover it. 00:29:36.445 [2024-10-16 07:12:35.849669] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.445 [2024-10-16 07:12:35.849720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.445 [2024-10-16 07:12:35.849734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.445 [2024-10-16 07:12:35.849741] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.445 [2024-10-16 07:12:35.849752] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:36.445 [2024-10-16 07:12:35.849767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.445 qpair failed and we were unable to recover it. 00:29:36.445 [2024-10-16 07:12:35.859681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.445 [2024-10-16 07:12:35.859731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.445 [2024-10-16 07:12:35.859744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.446 [2024-10-16 07:12:35.859751] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.446 [2024-10-16 07:12:35.859758] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:36.446 [2024-10-16 07:12:35.859771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.446 qpair failed and we were unable to recover it. 
00:29:36.446 [2024-10-16 07:12:35.869777] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.446 [2024-10-16 07:12:35.869879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.446 [2024-10-16 07:12:35.869894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.446 [2024-10-16 07:12:35.869901] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.446 [2024-10-16 07:12:35.869907] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:36.446 [2024-10-16 07:12:35.869922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.446 qpair failed and we were unable to recover it. 00:29:36.446 [2024-10-16 07:12:35.879747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.446 [2024-10-16 07:12:35.879798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.446 [2024-10-16 07:12:35.879812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.446 [2024-10-16 07:12:35.879819] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.446 [2024-10-16 07:12:35.879825] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:36.446 [2024-10-16 07:12:35.879839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.446 qpair failed and we were unable to recover it. 00:29:36.446 [2024-10-16 07:12:35.889763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.446 [2024-10-16 07:12:35.889814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.446 [2024-10-16 07:12:35.889827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.446 [2024-10-16 07:12:35.889835] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.446 [2024-10-16 07:12:35.889841] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:36.446 [2024-10-16 07:12:35.889859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.446 qpair failed and we were unable to recover it. 
00:29:36.446 [2024-10-16 07:12:35.899792] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.446 [2024-10-16 07:12:35.899882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.446 [2024-10-16 07:12:35.899896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.446 [2024-10-16 07:12:35.899903] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.446 [2024-10-16 07:12:35.899910] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:36.446 [2024-10-16 07:12:35.899924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.446 qpair failed and we were unable to recover it. 00:29:36.446 [2024-10-16 07:12:35.909848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.446 [2024-10-16 07:12:35.909906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.446 [2024-10-16 07:12:35.909919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.446 [2024-10-16 07:12:35.909926] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.446 [2024-10-16 07:12:35.909932] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:36.446 [2024-10-16 07:12:35.909946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.446 qpair failed and we were unable to recover it. 00:29:36.446 [2024-10-16 07:12:35.919873] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.446 [2024-10-16 07:12:35.919961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.446 [2024-10-16 07:12:35.919975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.446 [2024-10-16 07:12:35.919982] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.446 [2024-10-16 07:12:35.919988] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:36.446 [2024-10-16 07:12:35.920002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.446 qpair failed and we were unable to recover it. 
00:29:36.446 [2024-10-16 07:12:35.929876] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.446 [2024-10-16 07:12:35.929928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.446 [2024-10-16 07:12:35.929942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.446 [2024-10-16 07:12:35.929949] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.446 [2024-10-16 07:12:35.929955] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:36.446 [2024-10-16 07:12:35.929969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.446 qpair failed and we were unable to recover it. 00:29:36.446 [2024-10-16 07:12:35.939918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.446 [2024-10-16 07:12:35.939966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.446 [2024-10-16 07:12:35.939980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.446 [2024-10-16 07:12:35.939987] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.446 [2024-10-16 07:12:35.939997] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:36.446 [2024-10-16 07:12:35.940011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.446 qpair failed and we were unable to recover it. 00:29:36.709 [2024-10-16 07:12:35.950000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.709 [2024-10-16 07:12:35.950053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.709 [2024-10-16 07:12:35.950067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.709 [2024-10-16 07:12:35.950074] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.709 [2024-10-16 07:12:35.950080] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:36.709 [2024-10-16 07:12:35.950094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.709 qpair failed and we were unable to recover it. 
00:29:36.709 [2024-10-16 07:12:35.959983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.709 [2024-10-16 07:12:35.960029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.709 [2024-10-16 07:12:35.960043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.709 [2024-10-16 07:12:35.960050] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.709 [2024-10-16 07:12:35.960056] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:36.709 [2024-10-16 07:12:35.960070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.709 qpair failed and we were unable to recover it. 00:29:36.709 [2024-10-16 07:12:35.969991] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.709 [2024-10-16 07:12:35.970037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.709 [2024-10-16 07:12:35.970051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.709 [2024-10-16 07:12:35.970058] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.709 [2024-10-16 07:12:35.970064] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:36.709 [2024-10-16 07:12:35.970077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.709 qpair failed and we were unable to recover it. 00:29:36.709 [2024-10-16 07:12:35.980010] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.709 [2024-10-16 07:12:35.980056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.709 [2024-10-16 07:12:35.980070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.709 [2024-10-16 07:12:35.980077] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.709 [2024-10-16 07:12:35.980083] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:36.709 [2024-10-16 07:12:35.980096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.709 qpair failed and we were unable to recover it. 
00:29:36.709 [2024-10-16 07:12:35.990095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.709 [2024-10-16 07:12:35.990174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.709 [2024-10-16 07:12:35.990188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.709 [2024-10-16 07:12:35.990195] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.709 [2024-10-16 07:12:35.990201] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:36.709 [2024-10-16 07:12:35.990215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.709 qpair failed and we were unable to recover it. 00:29:36.709 [2024-10-16 07:12:36.000082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.709 [2024-10-16 07:12:36.000133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.709 [2024-10-16 07:12:36.000146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.709 [2024-10-16 07:12:36.000153] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.709 [2024-10-16 07:12:36.000160] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:36.709 [2024-10-16 07:12:36.000173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.709 qpair failed and we were unable to recover it. 00:29:36.709 [2024-10-16 07:12:36.009983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.709 [2024-10-16 07:12:36.010029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.709 [2024-10-16 07:12:36.010043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.709 [2024-10-16 07:12:36.010050] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.709 [2024-10-16 07:12:36.010056] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:36.709 [2024-10-16 07:12:36.010070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.709 qpair failed and we were unable to recover it. 
00:29:36.709 [2024-10-16 07:12:36.020109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.709 [2024-10-16 07:12:36.020154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.709 [2024-10-16 07:12:36.020168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.709 [2024-10-16 07:12:36.020175] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.709 [2024-10-16 07:12:36.020181] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:36.709 [2024-10-16 07:12:36.020195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.709 qpair failed and we were unable to recover it. 00:29:36.709 [2024-10-16 07:12:36.030223] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.709 [2024-10-16 07:12:36.030274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.709 [2024-10-16 07:12:36.030287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.709 [2024-10-16 07:12:36.030298] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.709 [2024-10-16 07:12:36.030304] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:36.709 [2024-10-16 07:12:36.030319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.709 qpair failed and we were unable to recover it. 00:29:36.709 [2024-10-16 07:12:36.040211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.709 [2024-10-16 07:12:36.040261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.709 [2024-10-16 07:12:36.040274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.709 [2024-10-16 07:12:36.040281] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.709 [2024-10-16 07:12:36.040288] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:36.710 [2024-10-16 07:12:36.040302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.710 qpair failed and we were unable to recover it. 
00:29:36.710 [2024-10-16 07:12:36.050218] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.710 [2024-10-16 07:12:36.050260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.710 [2024-10-16 07:12:36.050274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.710 [2024-10-16 07:12:36.050281] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.710 [2024-10-16 07:12:36.050287] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:36.710 [2024-10-16 07:12:36.050300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.710 qpair failed and we were unable to recover it. 00:29:36.710 [2024-10-16 07:12:36.060226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.710 [2024-10-16 07:12:36.060270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.710 [2024-10-16 07:12:36.060283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.710 [2024-10-16 07:12:36.060290] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.710 [2024-10-16 07:12:36.060296] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:36.710 [2024-10-16 07:12:36.060310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.710 qpair failed and we were unable to recover it. 00:29:36.710 [2024-10-16 07:12:36.070204] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.710 [2024-10-16 07:12:36.070264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.710 [2024-10-16 07:12:36.070278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.710 [2024-10-16 07:12:36.070285] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.710 [2024-10-16 07:12:36.070291] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90 00:29:36.710 [2024-10-16 07:12:36.070305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.710 qpair failed and we were unable to recover it. 
00:29:36.710 [2024-10-16 07:12:36.080331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.710 [2024-10-16 07:12:36.080380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.710 [2024-10-16 07:12:36.080393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.710 [2024-10-16 07:12:36.080400] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.710 [2024-10-16 07:12:36.080406] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:36.710 [2024-10-16 07:12:36.080420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:36.710 qpair failed and we were unable to recover it.
[log condensed: the seven-message CONNECT failure sequence above repeats 59 more times, timestamps 07:12:36.090 through 07:12:36.672, always for tqpair=0x7f64dc000b90 on qpair id 1, each attempt ending "qpair failed and we were unable to recover it."]
00:29:37.285 [2024-10-16 07:12:36.681939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:37.285 [2024-10-16 07:12:36.681995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:37.285 [2024-10-16 07:12:36.682009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:37.285 [2024-10-16 07:12:36.682016] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:37.285 [2024-10-16 07:12:36.682022] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f64dc000b90
00:29:37.285 [2024-10-16 07:12:36.682036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:37.285 qpair failed and we were unable to recover it.
00:29:37.285 [2024-10-16 07:12:36.682192] nvme_ctrlr.c:4505:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed
00:29:37.285 A controller has encountered a failure and is being reset.
00:29:37.285 [2024-10-16 07:12:36.682313] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1708ed0 (9): Bad file descriptor
00:29:37.285 Controller properly reset.
[log condensed: the reset aborts 32 outstanding I/Os (16 reads, 16 writes); each is logged as "Read/Write completed with error (sct=0, sc=8)" followed by "starting I/O failed"]
00:29:37.285 [2024-10-16 07:12:36.743106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:37.285 Initializing NVMe Controllers
00:29:37.285 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:37.285 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:37.285 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:29:37.285 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:29:37.285 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:29:37.285 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:29:37.285 Initialization complete. Launching workers.
00:29:37.285 Starting thread on core 1
00:29:37.285 Starting thread on core 2
00:29:37.285 Starting thread on core 3
00:29:37.285 Starting thread on core 0
00:29:37.285 07:12:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:29:37.562 
00:29:37.562 real 0m11.402s
00:29:37.562 user 0m21.984s
00:29:37.562 sys 0m3.861s
00:29:37.562 07:12:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable
00:29:37.562 07:12:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:37.562 ************************************
00:29:37.562 END TEST nvmf_target_disconnect_tc2
00:29:37.562 ************************************
00:29:37.562 07:12:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:29:37.562 07:12:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:29:37.562 07:12:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:29:37.562 07:12:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup
00:29:37.562 07:12:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync
00:29:37.562 07:12:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:37.562 07:12:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e
00:29:37.562 07:12:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:37.562 07:12:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
07:12:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:29:37.563 07:12:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:29:37.563 07:12:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@515 -- # '[' -n 3313759 ']' 00:29:37.563 07:12:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # killprocess 3313759 00:29:37.563 07:12:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 3313759 ']' 00:29:37.563 07:12:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 3313759 00:29:37.563 07:12:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:29:37.563 07:12:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:37.563 07:12:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3313759 00:29:37.563 07:12:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:29:37.563 07:12:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:29:37.563 07:12:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3313759' 00:29:37.563 killing process with pid 3313759 00:29:37.563 07:12:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 3313759 00:29:37.563 07:12:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 3313759 00:29:37.883 07:12:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:37.883 07:12:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:37.883 07:12:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:37.883 07:12:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:29:37.883 07:12:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-save 00:29:37.883 07:12:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:37.883 07:12:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-restore 00:29:37.883 07:12:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:37.883 07:12:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:37.883 07:12:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:37.883 07:12:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:37.883 07:12:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:39.799 07:12:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:39.799 00:29:39.799 real 0m21.789s 00:29:39.799 user 0m49.718s 00:29:39.799 sys 0m9.988s 00:29:39.799 07:12:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:39.799 07:12:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:39.799 ************************************ 
00:29:39.799 END TEST nvmf_target_disconnect 00:29:39.799 ************************************ 00:29:39.799 07:12:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:29:39.799 00:29:39.799 real 6m32.015s 00:29:39.799 user 11m21.551s 00:29:39.799 sys 2m15.223s 00:29:39.799 07:12:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:39.799 07:12:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.799 ************************************ 00:29:39.799 END TEST nvmf_host 00:29:39.799 ************************************ 00:29:39.799 07:12:39 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:29:39.799 07:12:39 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:29:39.799 07:12:39 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:29:39.799 07:12:39 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:29:39.799 07:12:39 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:39.799 07:12:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:40.062 ************************************ 00:29:40.062 START TEST nvmf_target_core_interrupt_mode 00:29:40.062 ************************************ 00:29:40.062 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:29:40.062 * Looking for test storage... 00:29:40.062 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:29:40.062 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:40.062 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lcov --version 00:29:40.062 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:40.062 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:40.062 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:40.062 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:40.062 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:40.062 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:29:40.062 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:29:40.062 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:29:40.062 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:29:40.062 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:29:40.062 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:29:40.062 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:29:40.062 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:40.062 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:29:40.062 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # 
: 1 00:29:40.062 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:40.062 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:40.062 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:29:40.062 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:29:40.062 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:40.062 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:29:40.062 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:29:40.062 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:29:40.062 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:29:40.062 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:40.062 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:29:40.062 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:29:40.062 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:40.062 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:40.062 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:29:40.062 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:40.062 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:40.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:40.062 --rc genhtml_branch_coverage=1 00:29:40.062 --rc genhtml_function_coverage=1 00:29:40.062 --rc genhtml_legend=1 00:29:40.062 --rc geninfo_all_blocks=1 00:29:40.062 --rc geninfo_unexecuted_blocks=1 00:29:40.062 00:29:40.062 ' 00:29:40.062 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:40.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:40.062 --rc genhtml_branch_coverage=1 00:29:40.062 --rc genhtml_function_coverage=1 00:29:40.062 --rc genhtml_legend=1 00:29:40.062 --rc geninfo_all_blocks=1 00:29:40.062 --rc geninfo_unexecuted_blocks=1 00:29:40.062 00:29:40.062 ' 00:29:40.062 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:40.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:40.062 --rc genhtml_branch_coverage=1 00:29:40.062 --rc genhtml_function_coverage=1 00:29:40.062 --rc genhtml_legend=1 00:29:40.062 --rc geninfo_all_blocks=1 00:29:40.062 --rc geninfo_unexecuted_blocks=1 00:29:40.062 00:29:40.062 ' 00:29:40.062 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:40.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:40.062 --rc genhtml_branch_coverage=1 00:29:40.062 --rc genhtml_function_coverage=1 00:29:40.062 --rc genhtml_legend=1 00:29:40.062 --rc geninfo_all_blocks=1 00:29:40.062 --rc geninfo_unexecuted_blocks=1 00:29:40.062 00:29:40.062 ' 00:29:40.062 07:12:39 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:29:40.062 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' Linux = Linux ']' 00:29:40.062 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:40.062 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:29:40.062 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:40.062 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:40.062 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:40.062 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:40.062 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:40.062 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:40.062 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:40.062 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:40.062 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:40.062 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:40.062 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:40.063 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:40.063 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:40.063 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:40.063 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:40.063 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:40.063 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:40.063 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:29:40.063 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:40.063 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:40.063 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:40.063 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.063 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.063 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.063 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:29:40.063 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.063 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:29:40.063 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:40.063 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:40.063 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:40.063 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:40.063 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:40.063 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:40.063 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:40.063 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:40.063 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:40.063 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:40.063 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 
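The trace above is test/nvmf/common.sh establishing the target environment for the interrupt-mode pass: fixed ports 4420-4422, a freshly generated hostnqn, and the NVMF_APP argument array, to which --interrupt-mode is appended because the '[' 1 -eq 1 ']' guard at nvmf/common.sh@33 holds for this suite. A minimal bash sketch of that argument-assembly pattern (the binary path and the interrupt_mode variable are assumed stand-ins for what common.sh actually derives):

    #!/usr/bin/env bash
    # Sketch of the NVMF_APP build-up traced at nvmf/common.sh@25-34 above.
    NVMF_APP=(build/bin/nvmf_tgt)                 # assumed path to the target binary
    NVMF_APP_SHM_ID=0
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shared-memory id + full tracepoint mask
    interrupt_mode=1                              # assumed: derived from --interrupt-mode in TEST_ARGS
    if [ "$interrupt_mode" -eq 1 ]; then
        NVMF_APP+=(--interrupt-mode)              # the branch taken in this run
    fi
    printf '%s\n' "${NVMF_APP[@]}"                # inspect the assembled command line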
00:29:40.063 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:29:40.063 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:29:40.063 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:29:40.063 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:29:40.063 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:40.063 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:40.325 ************************************ 00:29:40.325 START TEST nvmf_abort 00:29:40.325 ************************************ 00:29:40.325 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:29:40.325 * Looking for test storage... 00:29:40.325 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:40.325 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:40.325 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:29:40.325 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:40.325 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:40.325 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:40.325 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:40.325 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:40.325 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:29:40.325 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:29:40.325 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:29:40.325 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:29:40.325 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:29:40.325 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:29:40.325 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:29:40.325 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:40.325 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:29:40.325 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:29:40.325 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:40.325 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 
-- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:40.325 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:29:40.325 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:29:40.325 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:40.325 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:29:40.325 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:29:40.325 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:29:40.325 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:29:40.325 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:40.325 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:29:40.325 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:29:40.325 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:40.325 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:40.325 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:29:40.325 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:40.325 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:40.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:40.325 --rc genhtml_branch_coverage=1 00:29:40.325 --rc genhtml_function_coverage=1 00:29:40.325 --rc genhtml_legend=1 00:29:40.325 --rc geninfo_all_blocks=1 00:29:40.325 --rc geninfo_unexecuted_blocks=1 00:29:40.325 00:29:40.325 ' 00:29:40.325 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:40.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:40.325 --rc genhtml_branch_coverage=1 00:29:40.325 --rc genhtml_function_coverage=1 00:29:40.325 --rc genhtml_legend=1 00:29:40.325 --rc geninfo_all_blocks=1 00:29:40.325 --rc geninfo_unexecuted_blocks=1 00:29:40.325 00:29:40.325 ' 00:29:40.325 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:40.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:40.325 --rc genhtml_branch_coverage=1 00:29:40.325 --rc genhtml_function_coverage=1 00:29:40.325 --rc genhtml_legend=1 00:29:40.325 --rc geninfo_all_blocks=1 00:29:40.325 --rc geninfo_unexecuted_blocks=1 00:29:40.325 00:29:40.325 ' 00:29:40.325 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:40.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:40.325 --rc genhtml_branch_coverage=1 00:29:40.325 --rc genhtml_function_coverage=1 00:29:40.325 --rc genhtml_legend=1 00:29:40.325 --rc geninfo_all_blocks=1 00:29:40.325 --rc geninfo_unexecuted_blocks=1 00:29:40.325 00:29:40.325 ' 00:29:40.326 07:12:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:40.326 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:29:40.326 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:40.326 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:40.326 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:40.326 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:40.326 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:40.326 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:40.326 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:40.326 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:40.326 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:40.326 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:40.326 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:40.326 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:40.326 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:40.326 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:40.326 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:40.326 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:40.326 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:40.326 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:29:40.326 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:40.326 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:40.326 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:40.326 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.326 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.326 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.326 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:29:40.326 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.326 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:29:40.326 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:40.326 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:40.326 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:40.326 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:40.326 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:40.326 07:12:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:40.326 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:40.326 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:40.326 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:40.326 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:40.326 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:40.326 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:29:40.326 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:29:40.326 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:40.326 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:40.326 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:40.326 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:40.326 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:40.326 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:40.326 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:40.326 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:40.326 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:40.326 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:40.587 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:29:40.587 07:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:48.736 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:48.736 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:29:48.736 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:48.736 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:48.736 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:48.736 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:48.736 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:48.736 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:29:48.736 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:48.736 07:12:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:29:48.736 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:29:48.736 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:29:48.736 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:29:48.736 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:29:48.736 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:29:48.736 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:48.736 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:48.736 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:48.736 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:48.736 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:48.736 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:48.736 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:48.736 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:48.736 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:48.736 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:48.736 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:48.736 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:48.736 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:48.736 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:48.736 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:48.736 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:48.736 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:48.736 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:48.736 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:48.736 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:48.736 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:48.736 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
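The ID-matching walk above and immediately below is common.sh sorting detected NICs into per-family arrays keyed by PCI vendor:device ID; since the suite's NIC selection is e810 (the [[ e810 == e810 ]] tests), pci_devs collapses to the e810 list, and 0x8086:0x159b qualifies both ports at 0000:4b:00.x. A condensed sketch of the same classification, substituting a live lspci scan for the script's prebuilt pci_bus_cache map (an assumption):

    #!/usr/bin/env bash
    # Group NICs by PCI vendor:device ID, mirroring the e810/x722/mlx arrays above.
    intel=0x8086 mellanox=0x15b3
    e810=() x722=() mlx=()
    while read -r addr _ id _; do                 # lspci -Dn: "0000:4b:00.0 0200: 8086:159b ..."
        vendor=0x${id%%:*} device=0x${id##*:}
        case "$vendor:$device" in
            "$intel:0x1592" | "$intel:0x159b") e810+=("$addr") ;;  # E810 family
            "$intel:0x37d2")                   x722+=("$addr") ;;  # X722
            "$mellanox:"*)                     mlx+=("$addr") ;;   # ConnectX family
        esac
    done < <(lspci -Dn)
    echo "e810 ports: ${e810[*]:-none}"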
00:29:48.736 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:48.736 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:48.736 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:48.736 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:48.736 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:48.736 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:48.736 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:48.736 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:48.736 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:48.736 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:48.736 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:48.736 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:48.736 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:48.736 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:48.736 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:48.736 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:48.736 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:48.736 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:48.737 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:48.737 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:48.737 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:48.737 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:48.737 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:48.737 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:48.737 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:48.737 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:48.737 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:48.737 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:48.737 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:29:48.737 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:48.737 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:48.737 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:48.737 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:48.737 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:48.737 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:48.737 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:48.737 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:29:48.737 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:48.737 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:48.737 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:48.737 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:48.737 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:48.737 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:48.737 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:48.737 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:48.737 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:48.737 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:48.737 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:48.737 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:48.737 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:48.737 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:48.737 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:48.737 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:48.737 07:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:48.737 07:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:48.737 07:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:48.737 07:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:48.737 07:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:48.737 07:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:48.737 07:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:48.737 07:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:48.737 07:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:48.737 07:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:48.737 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:48.737 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.718 ms 00:29:48.737 00:29:48.737 --- 10.0.0.2 ping statistics --- 00:29:48.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:48.737 rtt min/avg/max/mdev = 0.718/0.718/0.718/0.000 ms 00:29:48.737 07:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:48.737 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:48.737 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:29:48.737 00:29:48.737 --- 10.0.0.1 ping statistics --- 00:29:48.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:48.737 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:29:48.737 07:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:48.737 07:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:29:48.737 07:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:48.737 07:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:48.737 07:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:48.737 07:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:48.737 07:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:48.737 07:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:48.737 07:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:48.737 07:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:29:48.737 07:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:48.737 07:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:48.737 07:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:48.737 07:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # 
nvmfpid=3319196 00:29:48.737 07:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 3319196 00:29:48.737 07:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:29:48.737 07:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 3319196 ']' 00:29:48.737 07:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:48.737 07:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:48.737 07:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:48.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:48.737 07:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:48.737 07:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:48.737 [2024-10-16 07:12:47.410744] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:48.737 [2024-10-16 07:12:47.411872] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:29:48.737 [2024-10-16 07:12:47.411921] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:48.737 [2024-10-16 07:12:47.503971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:48.737 [2024-10-16 07:12:47.555049] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:48.737 [2024-10-16 07:12:47.555103] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:48.737 [2024-10-16 07:12:47.555111] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:48.737 [2024-10-16 07:12:47.555118] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:48.737 [2024-10-16 07:12:47.555125] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:48.737 [2024-10-16 07:12:47.557211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:48.737 [2024-10-16 07:12:47.557378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:48.737 [2024-10-16 07:12:47.557377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:48.737 [2024-10-16 07:12:47.633811] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:48.737 [2024-10-16 07:12:47.634806] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:48.737 [2024-10-16 07:12:47.635159] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
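Setup recap before the target starts serving: cvl_0_0 was moved into the cvl_0_0_ns_spdk namespace as the target side and given 10.0.0.2, the initiator side cvl_0_1 kept 10.0.0.1, both directions were confirmed with ping (0.718 ms and 0.303 ms round trips), and nvmfappstart launched nvmf_tgt in that namespace with -m 0xE (core mask 0b1110, i.e. cores 1-3, matching the three reactors reported above) plus --interrupt-mode. Roughly, the launch-and-wait step looks like this sketch (the polling loop approximates the real waitforlisten helper):

    # Launch the target inside the test namespace, then block until its RPC socket answers.
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.5                                 # waitforlisten-style poll (sketch)
    done
    echo "target up, pid $nvmfpid"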
00:29:48.737 [2024-10-16 07:12:47.635375] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:48.999 07:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:48.999 07:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:29:48.999 07:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:48.999 07:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:48.999 07:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:48.999 07:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:48.999 07:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:29:48.999 07:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.999 07:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:48.999 [2024-10-16 07:12:48.294299] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:48.999 07:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.999 07:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:29:48.999 07:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.999 07:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:48.999 Malloc0 00:29:48.999 07:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.999 07:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:48.999 07:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.999 07:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:48.999 Delay0 00:29:48.999 07:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.999 07:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:48.999 07:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.999 07:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:48.999 07:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.999 07:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:29:49.000 07:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 
00:29:49.000 07:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:49.000 07:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:49.000 07:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:49.000 07:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:49.000 07:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:49.000 [2024-10-16 07:12:48.398295] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:49.000 07:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:49.000 07:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:49.000 07:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:49.000 07:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:49.000 07:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:49.000 07:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:29:49.261 [2024-10-16 07:12:48.571034] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:29:51.179 Initializing NVMe Controllers 00:29:51.179 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:29:51.179 controller IO queue size 128 less than required 00:29:51.179 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:29:51.179 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:29:51.179 Initialization complete. Launching workers. 
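[Reader's note — not harness output. Stripped of the xtrace noise, the nvmf_abort setup traced above reduces to a handful of RPCs followed by the abort example from the initiator side. A condensed replay with the arguments exactly as traced (rpc_cmd is the test harness's wrapper around scripts/rpc.py; the $rpc shorthand below is ours):
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
$rpc bdev_malloc_create 64 4096 -b Malloc0        # 64 MB malloc bdev, 4 KiB blocks
$rpc bdev_delay_create -b Malloc0 -d Delay0 \
     -r 1000000 -t 1000000 -w 1000000 -n 1000000  # ~1 s latencies (us) keep I/Os in flight long enough to abort
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# initiator side: run the abort example against the listener just created
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128]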
00:29:51.179 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 28605 00:29:51.179 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28666, failed to submit 66 00:29:51.179 success 28605, unsuccessful 61, failed 0 00:29:51.179 07:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:51.179 07:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.179 07:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:51.179 07:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.179 07:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:29:51.179 07:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:29:51.179 07:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:51.179 07:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:29:51.179 07:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:51.179 07:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:29:51.179 07:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:51.179 07:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:51.441 rmmod nvme_tcp 00:29:51.441 rmmod nvme_fabrics 00:29:51.441 rmmod nvme_keyring 00:29:51.441 07:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:51.441 07:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:29:51.441 07:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:29:51.441 07:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 3319196 ']' 00:29:51.441 07:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 3319196 00:29:51.441 07:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 3319196 ']' 00:29:51.441 07:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 3319196 00:29:51.441 07:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:29:51.441 07:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:51.441 07:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3319196 00:29:51.441 07:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:51.441 07:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:51.441 07:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3319196' 00:29:51.441 killing process with pid 3319196 
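[Reader's note: the abort counters above are self-consistent. The initiator saw 127 I/Os complete normally and 28605 complete in error: 127 + 28605 = 28732, which the abort side accounts for exactly: 28666 aborts submitted + 66 that could not be submitted = 28732, and of the submitted aborts 28605 succeeded + 61 were unsuccessful = 28666. Every "failed" I/O therefore corresponds to one successfully aborted command. The nvmftestfini teardown that follows unloads nvme-tcp/nvme-fabrics/nvme-keyring, kills the target (pid 3319196), restores iptables minus the SPDK_NVMF rules, and flushes the initiator-side address.]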
00:29:51.441 07:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@969 -- # kill 3319196 00:29:51.441 07:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@974 -- # wait 3319196 00:29:51.703 07:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:51.703 07:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:51.703 07:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:51.703 07:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:29:51.703 07:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:29:51.703 07:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:51.703 07:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:29:51.703 07:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:51.703 07:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:51.703 07:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:51.703 07:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:51.703 07:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:53.618 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:53.618 00:29:53.618 real 0m13.487s 00:29:53.618 user 0m11.023s 00:29:53.618 sys 0m7.080s 00:29:53.618 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:53.618 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:53.618 ************************************ 00:29:53.618 END TEST nvmf_abort 00:29:53.618 ************************************ 00:29:53.618 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:29:53.618 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:29:53.618 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:53.618 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:53.880 ************************************ 00:29:53.880 START TEST nvmf_ns_hotplug_stress 00:29:53.880 ************************************ 00:29:53.880 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:29:53.880 * Looking for test storage... 
00:29:53.881 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:53.881 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:53.881 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:29:53.881 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:53.881 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:53.881 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:53.881 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:53.881 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:53.881 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:29:53.881 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:29:53.881 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:29:53.881 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:29:53.881 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:29:53.881 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:29:53.881 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:29:53.881 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:53.881 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:29:53.881 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:29:53.881 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:53.881 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:53.881 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:29:53.881 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:29:53.881 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:53.881 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:29:53.881 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:29:53.881 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:29:53.881 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:29:53.881 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:53.881 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:29:53.881 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:29:53.881 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:53.881 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:53.881 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:29:53.881 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:53.881 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:53.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:53.881 --rc genhtml_branch_coverage=1 00:29:53.881 --rc genhtml_function_coverage=1 00:29:53.881 --rc genhtml_legend=1 00:29:53.881 --rc geninfo_all_blocks=1 00:29:53.881 --rc geninfo_unexecuted_blocks=1 00:29:53.881 00:29:53.881 ' 00:29:53.881 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:53.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:53.881 --rc genhtml_branch_coverage=1 00:29:53.881 --rc genhtml_function_coverage=1 00:29:53.881 --rc genhtml_legend=1 00:29:53.881 --rc geninfo_all_blocks=1 00:29:53.881 --rc geninfo_unexecuted_blocks=1 00:29:53.881 00:29:53.881 ' 00:29:53.881 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:53.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:53.881 --rc genhtml_branch_coverage=1 00:29:53.881 --rc genhtml_function_coverage=1 00:29:53.881 --rc genhtml_legend=1 00:29:53.881 --rc geninfo_all_blocks=1 00:29:53.881 --rc geninfo_unexecuted_blocks=1 00:29:53.881 00:29:53.881 ' 00:29:53.881 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:53.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:53.881 --rc genhtml_branch_coverage=1 00:29:53.881 --rc genhtml_function_coverage=1 
00:29:53.881 --rc genhtml_legend=1 00:29:53.881 --rc geninfo_all_blocks=1 00:29:53.881 --rc geninfo_unexecuted_blocks=1 00:29:53.881 00:29:53.881 ' 00:29:53.881 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:53.881 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:29:53.881 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:53.881 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:53.881 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:53.881 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:53.881 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:53.881 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:53.881 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:53.881 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:53.881 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:53.881 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:53.881 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:53.881 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:53.881 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:53.881 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:53.881 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:53.881 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:53.881 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:53.881 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:29:54.143 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:54.143 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:54.143 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
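[Reader's note on the lcov gate traced above: scripts/common.sh compares the installed lcov version (1.15) against 2 field by field and, since 1.15 < 2, selects the old-style --rc lcov_branch_coverage/lcov_function_coverage flags. A condensed stand-alone sketch of that dotted-version compare — not the verbatim common.sh source:
lt() {  # succeeds when dotted version $1 sorts before $2
  local IFS=.-:
  local -a a b
  read -ra a <<< "$1"
  read -ra b <<< "$2"
  local i
  for ((i = 0; i < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); i++)); do
    (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
    (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
  done
  return 1  # equal versions are not less-than
}
lt 1.15 2 && echo "old lcov: use the --rc lcov_* coverage flags"]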
00:29:54.143 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[the same three /opt toolchain prefixes repeated several more times]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin [paths/export.sh@3, @4 and @6 rotate the same prefixes to the front and echo the result; three near-identical PATH dumps elided] 00:29:54.144 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:29:54.144 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:29:54.144 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:54.144 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:54.144 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:54.144 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress --
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:54.144 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:54.144 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:54.144 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:54.144 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:54.144 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:54.144 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:54.144 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:54.144 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:29:54.144 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:54.144 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:54.144 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:54.144 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:54.144 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:54.144 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:54.144 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:54.144 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:54.144 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:54.144 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:54.144 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:29:54.144 07:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:02.290 07:13:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:02.290 07:13:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:02.290 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:02.290 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:02.290 
07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:02.290 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:02.290 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:02.290 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:02.291 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:02.291 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:02.291 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:02.291 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:02.291 07:13:00 
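[Reader's note: the gather_supported_nvmf_pci_devs walk traced above found both ports of an Intel E810 NIC (vendor 0x8086, device 0x159b, driver ice) and picked up their net devices cvl_0_0 and cvl_0_1 from sysfs. A minimal stand-alone equivalent of that scan — the real nvmf/common.sh also carries the x722 and Mellanox ID tables and RDMA-specific handling:
for pci in /sys/bus/pci/devices/*; do
  [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
  echo "Found ${pci##*/} ($(<"$pci/vendor") - $(<"$pci/device"))"
  for net in "$pci"/net/*; do
    [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
  done
done]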
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:02.291 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:02.291 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:02.291 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:02.291 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:02.291 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:02.291 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:02.291 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:02.291 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:02.291 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:02.291 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:02.291 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:02.291 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:02.291 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:02.291 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:02.291 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:02.291 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:02.291 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:02.291 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.588 ms 00:30:02.291 00:30:02.291 --- 10.0.0.2 ping statistics --- 00:30:02.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:02.291 rtt min/avg/max/mdev = 0.588/0.588/0.588/0.000 ms 00:30:02.291 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:02.291 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:02.291 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:30:02.291 00:30:02.291 --- 10.0.0.1 ping statistics --- 00:30:02.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:02.291 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:30:02.291 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:02.291 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:30:02.291 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:02.291 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:02.291 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:02.291 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:02.291 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:02.291 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:02.291 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:02.291 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:30:02.291 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:02.291 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:02.291 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:02.291 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=3324082 00:30:02.291 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 3324082 00:30:02.291 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:30:02.291 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 3324082 ']' 00:30:02.291 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:02.291 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:02.291 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:02.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
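[Reader's note: by this point nvmf_tcp_init has split the two E810 ports across network namespaces — the target port lives in cvl_0_0_ns_spdk, the initiator port stays in the root namespace — and both directions ping. Condensed replay of the commands traced above, with device names and addresses as this rig reported them:
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open TCP/4420 on the initiator-facing interface
ping -c 1 10.0.0.2                                             # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # and back
The target app itself is then launched inside the namespace, as traced above: ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE.]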
00:30:02.291 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:02.291 07:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:02.291 [2024-10-16 07:13:00.979470] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:02.291 [2024-10-16 07:13:00.980601] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:30:02.291 [2024-10-16 07:13:00.980654] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:02.291 [2024-10-16 07:13:01.071510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:02.291 [2024-10-16 07:13:01.123623] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:02.291 [2024-10-16 07:13:01.123679] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:02.291 [2024-10-16 07:13:01.123693] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:02.291 [2024-10-16 07:13:01.123700] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:02.291 [2024-10-16 07:13:01.123706] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:02.291 [2024-10-16 07:13:01.125795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:02.291 [2024-10-16 07:13:01.125938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:02.291 [2024-10-16 07:13:01.125939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:02.291 [2024-10-16 07:13:01.203425] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:02.291 [2024-10-16 07:13:01.204474] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:02.291 [2024-10-16 07:13:01.205219] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:02.291 [2024-10-16 07:13:01.205219] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
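[Reader's note: the target was started with -m 0xE; 0xE is binary 1110, i.e. cores 1, 2 and 3 — hence "Total cores available: 3" and the three reactors above, with the app_thread and the three nvmf_tgt_poll_group threads all switched to interrupt mode per --interrupt-mode.]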
00:30:02.554 07:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:02.554 07:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:30:02.554 07:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:02.554 07:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:02.554 07:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:02.554 07:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:02.554 07:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:30:02.554 07:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:02.554 [2024-10-16 07:13:02.026959] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:02.815 07:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:02.815 07:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:03.076 [2024-10-16 07:13:02.423759] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:03.076 07:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:03.337 07:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:30:03.337 Malloc0 00:30:03.337 07:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:03.599 Delay0 00:30:03.599 07:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:03.861 07:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:30:04.122 NULL1 00:30:04.122 07:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
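[Reader's note: with cnode1 now serving Delay0 as namespace 1 and NULL1 (a 1000 MB null bdev with 512 B blocks) as a second namespace, the stress phase traced below runs spdk_nvme_perf against the subsystem while the harness keeps hot-removing and re-adding namespace 1 and growing NULL1. A paraphrase of the ns_hotplug_stress.sh loop — PERF_PID is the perf process started just below:
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
null_size=1000
while kill -0 "$PERF_PID" 2>/dev/null; do
  $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # yank ns 1 mid-I/O
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # put it back
  null_size=$((null_size + 1))
  $rpc bdev_null_resize NULL1 "$null_size"                     # grow NULL1 under load
done]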
00:30:04.122 07:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3324575 00:30:04.122 07:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3324575 00:30:04.122 07:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:30:04.122 07:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:04.382 07:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:04.644 07:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:30:04.644 07:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:30:04.905 true 00:30:04.905 07:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3324575 00:30:04.905 07:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:04.905 07:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:05.167 07:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:30:05.167 07:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:30:05.428 true 00:30:05.428 07:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3324575 00:30:05.428 07:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:05.694 Read completed with error (sct=0, sc=11) 00:30:05.694 07:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:05.694 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:05.694 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:05.694 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:05.694 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:05.694 [2024-10-16 07:13:05.159550] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.694 [the identical ctrlr_bdev.c:361 nvmf_bdev_ctrlr_read_cmd SGL-length error repeats roughly ninety more times, timestamps 2024-10-16 07:13:05.159641 through 07:13:05.165315; the run is elided here] 00:30:05.695
[2024-10-16 07:13:05.165360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.695 [2024-10-16 07:13:05.165404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.695 [2024-10-16 07:13:05.165446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.695 [2024-10-16 07:13:05.165505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.695 [2024-10-16 07:13:05.165553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.695 [2024-10-16 07:13:05.165597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.695 [2024-10-16 07:13:05.165643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.695 [2024-10-16 07:13:05.165685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.695 [2024-10-16 07:13:05.165727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.695 [2024-10-16 07:13:05.165774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.695 [2024-10-16 07:13:05.165816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.695 [2024-10-16 07:13:05.165871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.695 [2024-10-16 07:13:05.165928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.695 [2024-10-16 07:13:05.165977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.695 [2024-10-16 07:13:05.166071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.695 [2024-10-16 07:13:05.166118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.695 [2024-10-16 07:13:05.166179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.695 [2024-10-16 07:13:05.166225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.695 [2024-10-16 07:13:05.166271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.695 [2024-10-16 07:13:05.166317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.695 [2024-10-16 07:13:05.166361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.695 [2024-10-16 07:13:05.166428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.695 [2024-10-16 07:13:05.166473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.695 [2024-10-16 07:13:05.166508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.695 [2024-10-16 07:13:05.166558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.695 [2024-10-16 07:13:05.166606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:30:05.695 [2024-10-16 07:13:05.166652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.695 [2024-10-16 07:13:05.166701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.695 [2024-10-16 07:13:05.166745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.695 [2024-10-16 07:13:05.166792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.695 [2024-10-16 07:13:05.166837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.695 [2024-10-16 07:13:05.166893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.695 [2024-10-16 07:13:05.166973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.695 [2024-10-16 07:13:05.167021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.695 [2024-10-16 07:13:05.167063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.695 [2024-10-16 07:13:05.167106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.695 [2024-10-16 07:13:05.167158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.695 [2024-10-16 07:13:05.167227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.167288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.167332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.167376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.167425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.167468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.167511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.167556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.167595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.167643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.167694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.167737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.167780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.167824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.167879] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.167916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.167961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.168004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.168204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.168258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.168307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.168356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.168399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.168444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.168487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.168546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.168591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.168635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.168685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.168740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:30:05.696 [2024-10-16 07:13:05.168790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.168834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.168887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.168934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.168988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.169036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.169101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.169148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.169234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.169278] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.169325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.169465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.169514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.169567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.169667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.169716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.169763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.169807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.169862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.169913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.169956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.170613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.170665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.170702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.170744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.170786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.170830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.170881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.170917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.170958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.171006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.171053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.171110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.171152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.171219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 
[2024-10-16 07:13:05.171270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.171343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.171408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.171454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.171492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.171536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.171579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.171625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.171665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.171710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.696 [2024-10-16 07:13:05.171756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.171804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.171857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.171908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.171955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.172002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.172050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.172094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.172139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.172188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.172233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.172279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.172322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.172368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.172406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.172443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.172491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.172532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.172574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.172617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.172663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.172711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.172753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.172800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.172854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.172902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.172955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.172999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.173045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.173092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.173157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.173207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.173268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.173315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.173367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.173412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.173460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.173506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.173555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.173602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.173799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.173853] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.173904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.173951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.174001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.174047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.174090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.174134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.174178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.174219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.174265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.174304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.174351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.174402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.174446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.174489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.174525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.174580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.174623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.174665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.174714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.174757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.174799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.174849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.174902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.174953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.174997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 
[2024-10-16 07:13:05.175043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.175089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.175136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.175853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.175904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.175954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.176003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.176050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.176098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.176142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.176198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.697 [2024-10-16 07:13:05.176237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.176284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.176328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.176398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.176450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.176501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.176544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.176592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.176639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.176682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.176733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.176778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.176825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.176877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.176924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.176971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.177016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.177062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.177112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.177163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.177208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.177257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.177304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.177371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.177420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.177480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.177524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.177574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.177618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.177664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.177717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.177766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.177823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.177888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.177954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.178002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.178049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.178093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.178134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.178180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.178221] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.178267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.178311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.178356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.178405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.178452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.178509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.178561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.178619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.178658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.178701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.178737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.178787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.178828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.178877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.178922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.179124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.179174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.179222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.179265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.179311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.179356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.179402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.179451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.179497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.179537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 
[2024-10-16 07:13:05.179585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.179635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.179682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.179722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.179768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.179812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.179867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.179915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.179960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.180013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.180060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.180108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.180151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.180196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.180242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.180288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.180334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.180374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.180417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.180468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.180515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.180557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.180604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.181298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.181348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.181420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.181468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.181530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.181578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.181631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.181681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.181726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.181768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.181812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.698 [2024-10-16 07:13:05.181861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.699 [2024-10-16 07:13:05.181944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.699 [2024-10-16 07:13:05.181989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.699 [2024-10-16 07:13:05.182048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.699 [2024-10-16 07:13:05.182095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.699 [2024-10-16 07:13:05.182138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.699 [2024-10-16 07:13:05.182183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.699 [2024-10-16 07:13:05.182230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.699 [2024-10-16 07:13:05.182279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.699 [2024-10-16 07:13:05.182320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.699 [2024-10-16 07:13:05.182386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.699 [2024-10-16 07:13:05.182433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.699 [2024-10-16 07:13:05.182479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.699 [2024-10-16 07:13:05.182527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.699 [2024-10-16 07:13:05.182577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.699 [2024-10-16 07:13:05.182628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.699 [2024-10-16 07:13:05.182673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.699 [2024-10-16 07:13:05.182731] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.699 [2024-10-16 07:13:05.182782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.699 [2024-10-16 07:13:05.182831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.699 [2024-10-16 07:13:05.182885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.699 [2024-10-16 07:13:05.182931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.699 [2024-10-16 07:13:05.182982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.699 [2024-10-16 07:13:05.183028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.699 [2024-10-16 07:13:05.183081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.699 [2024-10-16 07:13:05.183129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.699 [2024-10-16 07:13:05.183175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.699 [2024-10-16 07:13:05.183217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.699 [2024-10-16 07:13:05.183261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.699 [2024-10-16 07:13:05.183305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.699 [2024-10-16 07:13:05.183349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.699 [2024-10-16 07:13:05.183394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.699 [2024-10-16 07:13:05.183439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.699 [2024-10-16 07:13:05.183493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.699 [2024-10-16 07:13:05.183538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.699 [2024-10-16 07:13:05.183581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.699 [2024-10-16 07:13:05.183629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.699 [2024-10-16 07:13:05.183684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.699 [2024-10-16 07:13:05.183737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.699 [2024-10-16 07:13:05.183781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.699 [2024-10-16 07:13:05.183834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.699 [2024-10-16 07:13:05.183910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.699 [2024-10-16 07:13:05.183973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.699 
[2024-10-16 07:13:05.184032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:30:05.699 [... the same nvmf_bdev_ctrlr_read_cmd error repeated verbatim, timestamps 2024-10-16 07:13:05.184081 through 07:13:05.196203 ...] 
00:30:05.990 07:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 
00:30:05.990 [... the same error repeated, timestamps 07:13:05.196249 through 07:13:05.196529 ...] 
00:30:05.990 07:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 
00:30:05.990 [... the same error repeated, timestamps 07:13:05.196589 through 07:13:05.197281 ...]
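The repeated *ERROR* lines here are the stress test's expected failure path, not a malfunction: while ns_hotplug_stress.sh resizes the NULL1 bdev via rpc.py bdev_null_resize, each submitted read is rejected because the requested transfer, NLB 1 * block size 512 = 512 bytes, exceeds the 1-byte SGL supplied with the command. A minimal sketch of that kind of length guard, using illustrative names rather than SPDK's actual structures:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical sketch of the check behind "Read NLB 1 * block size 512 >
 * SGL length 1"; struct and field names are illustrative, not SPDK's. */
struct read_req {
    uint64_t nlb;        /* number of logical blocks requested (1 in the log) */
    uint32_t block_size; /* logical block size in bytes (512 in the log) */
    uint32_t sgl_length; /* bytes described by the host-supplied SGL (1 in the log) */
};

static int check_read_length(const struct read_req *req)
{
    uint64_t need = req->nlb * (uint64_t)req->block_size;

    if (need > req->sgl_length) {
        /* Same arithmetic the log reports: 1 * 512 > 1, so the read is rejected. */
        fprintf(stderr, "Read NLB %" PRIu64 " * block size %" PRIu32 " > SGL length %" PRIu32 "\n",
                req->nlb, req->block_size, req->sgl_length);
        return -1; /* the command would be completed with an error status */
    }
    return 0;
}

What the test checks is that resizing a namespace under this I/O load neither crashes the target nor lets an oversized read through; the bdev_null_resize call at ns_hotplug_stress.sh@50 resizes NULL1 to 1003 while the rejections continue.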
[... the same nvmf_bdev_ctrlr_read_cmd error repeated, timestamps 2024-10-16 07:13:05.197336 through 07:13:05.215669 ...] 
00:30:05.994 [2024-10-16 07:13:05.215718] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.215769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.215826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.215880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.215926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.215974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.216020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.216093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.216142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.216195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.216242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.216287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.216348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.216396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.216444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.216489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.216539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.216622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.216668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.216714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.216771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.216853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.216901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.216947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.217004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.217048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 
[2024-10-16 07:13:05.217096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.217139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.217188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.217249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.217301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.217347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.217389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.217439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.217488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.217540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.217589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.217642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.217695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.217742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.217795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.217856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.217904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.217950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.217992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.218028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.218077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.218277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.218326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.218370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.218418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.218466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.218515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.218590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.218638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.218683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.218727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.218772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.218819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.218871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.218925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.218970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.219029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.219075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.994 [2024-10-16 07:13:05.219126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.219174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.219220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.219269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.219320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.219374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.219424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.219470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.219515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.219561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.219610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.219662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.219708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.220389] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.220435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.220486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.220534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.220617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.220666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.220719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.220769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.220817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.220870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.220918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.220960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.221002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.221045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.221090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.221176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.221239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.221312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.221370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.221416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.221465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.221510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.221551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.221597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.221641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.221684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 
[2024-10-16 07:13:05.221726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.221777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.221818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.221865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.221918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.221986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.222040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.222092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.222129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.222171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.222216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.222264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.222308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.222352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.222395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.222442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.222491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.222567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.222611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.222660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.222707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.222757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.222810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.222864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.222914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.222961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.223010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.223067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.223117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.223152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.223201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.223243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.223295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.223345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.223409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.223459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.223505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.223551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.223758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.223807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.223864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.223913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.223958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.224003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.224048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.224097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.224148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.224191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.224240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.224285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.224327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.224379] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.224426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.224472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.224517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.224565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.224613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.224676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.224760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.224825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.224878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.224921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.224963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.995 [2024-10-16 07:13:05.225000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.225050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.225094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.225143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.225188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.225237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.225290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.225333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.226078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.226164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.226221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.226268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.226319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.226366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 
[2024-10-16 07:13:05.226421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.226468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.226516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.226564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.226611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.226685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.226733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.226783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.226826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.226883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.226933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.226975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.227060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.227104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.227180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.227228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.227284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.227331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.227377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.227430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.227478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.227538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.227589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.227644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.227691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.227739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.227793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.227834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.227898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.227946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.227991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.228040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.228085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.228135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.228179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.228223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.228270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.228315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.228357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.228403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.228464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.228510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.228554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.228601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.228644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.228706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.228743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.228792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.228852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.228895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.228944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.228986] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.229029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.229073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.229118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.229161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.229205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.229258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 Message suppressed 999 times: [2024-10-16 07:13:05.229577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 Read completed with error (sct=0, sc=15) 00:30:05.996 [2024-10-16 07:13:05.229629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.229680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.229724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.229771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.229816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.229867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.229919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.229964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.230010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.230062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.230123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.230170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.230208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.230260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.230309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.230352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.230407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.230456] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.230505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.230551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.230600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.230657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.230705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.230750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.230794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.230840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.230915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.230962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.231009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.231689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.231738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.996 [2024-10-16 07:13:05.231799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.231858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.231907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.231962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.232006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.232054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.232106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.232156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.232201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.232249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.232294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.232341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 
[2024-10-16 07:13:05.232382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.232425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.232489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.232547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.232590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.232641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.232681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.232721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.232768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.232812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.232862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.232903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.232949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.232994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.233036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.233083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.233129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.233176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.233223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.233271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.233319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.233369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.233413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.233467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.233511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.233558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.233603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.233652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.233704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.233747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.233799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.233856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.233901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.233979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.234023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.234075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.234123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.234170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.234229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.234276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.234379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.234425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.234471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.234518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.234563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.234615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.234663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.234730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.234777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.234820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.235021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [2024-10-16 07:13:05.235067] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:05.997 [... preceding *ERROR* line repeated verbatim several hundred times, wall-clock 2024-10-16 07:13:05.235126 through 07:13:05.266498, console timestamps 00:30:05.997 through 00:30:06.003 ...] [2024-10-16 07:13:05.266548] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.003 [2024-10-16 07:13:05.266596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.003 [2024-10-16 07:13:05.266640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.003 [2024-10-16 07:13:05.266693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.003 [2024-10-16 07:13:05.266742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.003 [2024-10-16 07:13:05.266787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.003 [2024-10-16 07:13:05.266836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.003 [2024-10-16 07:13:05.266890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.003 [2024-10-16 07:13:05.266941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.003 [2024-10-16 07:13:05.266992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.003 [2024-10-16 07:13:05.267038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.003 [2024-10-16 07:13:05.267073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.003 [2024-10-16 07:13:05.267116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.003 [2024-10-16 07:13:05.267164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.003 [2024-10-16 07:13:05.267199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.003 [2024-10-16 07:13:05.267245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.003 [2024-10-16 07:13:05.267296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.003 [2024-10-16 07:13:05.267354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.003 [2024-10-16 07:13:05.267403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.003 [2024-10-16 07:13:05.267447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.003 [2024-10-16 07:13:05.267494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.003 [2024-10-16 07:13:05.267543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.003 [2024-10-16 07:13:05.267603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.003 [2024-10-16 07:13:05.267651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.003 [2024-10-16 07:13:05.267700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.003 [2024-10-16 07:13:05.267749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.003 
[2024-10-16 07:13:05.267794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.003 [2024-10-16 07:13:05.267857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.003 [2024-10-16 07:13:05.267907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.003 [2024-10-16 07:13:05.267961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.003 [2024-10-16 07:13:05.268022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.003 [2024-10-16 07:13:05.268070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.003 [2024-10-16 07:13:05.268120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.003 [2024-10-16 07:13:05.268327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.003 [2024-10-16 07:13:05.268371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.003 [2024-10-16 07:13:05.268429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.003 [2024-10-16 07:13:05.268500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.003 [2024-10-16 07:13:05.268571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.003 [2024-10-16 07:13:05.268632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.003 [2024-10-16 07:13:05.268679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.003 [2024-10-16 07:13:05.268723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.003 [2024-10-16 07:13:05.268770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.003 [2024-10-16 07:13:05.268819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.003 [2024-10-16 07:13:05.268869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.003 [2024-10-16 07:13:05.268913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.003 [2024-10-16 07:13:05.268953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.003 [2024-10-16 07:13:05.269001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.003 [2024-10-16 07:13:05.269050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.003 [2024-10-16 07:13:05.269096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.003 [2024-10-16 07:13:05.269143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.003 [2024-10-16 07:13:05.269193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.003 [2024-10-16 07:13:05.269242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:30:06.003 [2024-10-16 07:13:05.269287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.003 [2024-10-16 07:13:05.269332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.003 [2024-10-16 07:13:05.269379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.003 [2024-10-16 07:13:05.269433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.003 [2024-10-16 07:13:05.269481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.003 [2024-10-16 07:13:05.269537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.269590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.270280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.270336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.270381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.270431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.270505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.270556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.270606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.270660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.270705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.270750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.270800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.270856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.270912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.270964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.271009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.271059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.271105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.271167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.271211] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.271270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.271319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.271394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.271440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.271488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.271539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.271583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.271639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.271687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.271765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.271810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.271868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.271917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.271962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.272036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.272082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.272127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.272176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.272222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.272271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.272317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.272359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.272404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.272440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.272487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 
[2024-10-16 07:13:05.272532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.272577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.272619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.272668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.272725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.272766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.272812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.272861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.272907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.272961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.273030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.273068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.273114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.273190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.273250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.273314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.273359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.273407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.273449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.273513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.273702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.273754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.273805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.273877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.273924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.273974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.274024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.274071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.274116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.274161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.274210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.274254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.274299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.274351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.274395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.274441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.274489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.274526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.274570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.004 [2024-10-16 07:13:05.274621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.274663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.274712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.274760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.274807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.274860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.274905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.274952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.274998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.275042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.275087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.275131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.275174] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.275229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.275276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.275324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.275368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.275411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.276058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.276107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.276154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.276208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.276253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.276296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.276344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.276389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.276435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.276494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.276560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.276626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.276676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.276714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.276764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.276809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.276865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.276928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.276977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.277021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 
[2024-10-16 07:13:05.277069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.277116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.277167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.277211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.277260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.277308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.277360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.277412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.277463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.277512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.277563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.277598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.277645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.277694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.277729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.277769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.277814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.277868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.277931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.277979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.278028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.278072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.278117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.278173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.278219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.278261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.278306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.278352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.278410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.278458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.278511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.278557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.278612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.278659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.278711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.278758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.278802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.278853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.278899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.278963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.279014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.279058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.279107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.279156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.279376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.005 [2024-10-16 07:13:05.279427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.279473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.279518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.279568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.279623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.279671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.279720] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.279771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.279822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.279895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.279940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.279992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.280049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.280093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.280156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.280218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.280262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.280309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.280357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.280407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.280448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.280492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.280534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.280569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.280618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.281315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.281367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.281416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.281466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.281515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.281560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.281608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 
[2024-10-16 07:13:05.281653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.281696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.281747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.281795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.281850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.281899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.281947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.281988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.282033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.282077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.282118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.282166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.282216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.282265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.282312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.282362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.282409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.282459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.282509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.282561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.282607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.282653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.282702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.282748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.282796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.282857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.282907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.282964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.283015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.283075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.283119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.283164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.283209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.283258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.283328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.283373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.283418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.283476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.283517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.283557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.283599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.283651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.283727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.283797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.283843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.283898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.283944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.284010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.284050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.284098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.284171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.284216] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.284260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.284306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.284351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.284400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.284451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.284673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.284721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.284769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.284819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.284871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.284919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.284965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.285009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.006 [2024-10-16 07:13:05.285057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.007 [2024-10-16 07:13:05.285100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.007 [2024-10-16 07:13:05.285163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.007 [2024-10-16 07:13:05.285209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.007 [2024-10-16 07:13:05.285255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.007 [2024-10-16 07:13:05.285301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.007 [2024-10-16 07:13:05.285346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.007 [2024-10-16 07:13:05.285388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.007 [2024-10-16 07:13:05.285435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.007 [2024-10-16 07:13:05.285514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.007 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:30:06.007 [2024-10-16 07:13:05.285561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.007 [2024-10-16 07:13:05.285614] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.007
[... the identical "ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" line repeats several hundred times while the unit test loops over this error path; message timestamps run 2024-10-16 07:13:05.285661 through 07:13:05.316946, console timestamps 00:30:06.007 through 00:30:06.012 ...]
size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.315764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.315811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.315863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.315904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.315956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.316002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.316042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.316095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.316141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.316184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.316233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.316303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.316352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.316402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.316447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.316492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.316537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.316583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.316632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.316678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.316730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.316781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.316826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.316881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.316946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.316994] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.317039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.317082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.317139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.317185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.317234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.317283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.317333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.317384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.317439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.317488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.317541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.317590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.317636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.317673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.317933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.317983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.318046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.318094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.318139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.318192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.318235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.318282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.318333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.318378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.318427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 
[2024-10-16 07:13:05.318463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.318513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.318557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.318604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.318648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.318700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.318750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.318804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.318859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.318910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.318959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.319006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.319060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.319104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.319154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.319198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.319233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.319269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.319305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.319342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.319411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.319459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.319506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.012 [2024-10-16 07:13:05.319562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.319612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.319657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.320328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.320391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.320441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.320489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.320541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.320588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.320636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.320729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.320776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.320833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.320906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.320953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.321000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.321102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.321149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.321196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.321249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.321299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.321343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.321386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.321436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.321494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.321544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.321593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.321654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.321701] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.321744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.321789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.321836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.321896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.321940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.321985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.322037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.322094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.322153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.322198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.322247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.322291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.322345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.322385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.322437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.322487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.322533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.322582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.322627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.322677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.322729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.322773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.322818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.322872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.322919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 
[2024-10-16 07:13:05.322971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.323016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.323065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.323113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.323162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.323222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.323281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.323331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.323386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.323799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.323860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.323907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.323956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.324007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.324061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.324112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.324163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.324224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.324272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.324319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.324367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.324415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.324476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.324528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.324580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.324624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.324673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.324720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.324770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.324818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.324875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.324928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.324973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.325016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.325065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.325117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.325176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.325226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.325268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.325316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.325377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.325427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.325477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.325524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.325587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.325635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.325675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.325722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.325769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.325853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.013 [2024-10-16 07:13:05.325900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.325943] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.325985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.326029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.326070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.326128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.326200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.326247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.326285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.326338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.326383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.326437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.326485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.326532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.326578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.326632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.326676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.326726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.326778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.326828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.326874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.326921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.326965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.327161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.327236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.327288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.328048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 
[2024-10-16 07:13:05.328104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.328155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.328217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.328269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.328316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.328366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.328414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.328469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.328520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.328601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.328653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.328702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.328783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.328830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.328888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.328937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.328986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.329034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.329084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.329137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.329189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.329230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.329280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.329324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.329371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.329413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.329462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.329537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.329585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.329628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.329678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.329720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.329783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.329852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.329901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.329940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.329990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.330033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.330081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.330130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.330184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.330234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.330282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.330326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.330377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.330430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.330482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.330531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.330578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.330628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.330675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.330722] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.330775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.330829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.330887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.330933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.330982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.331030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.331077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.331322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.331374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.331420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.331473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.331524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.331573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.331626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.331676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.331716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.331762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.331814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.331866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.331915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.331995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.332046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.332090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.332142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 [2024-10-16 07:13:05.332190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.014 
[2024-10-16 07:13:05.332241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.332299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.332361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.332407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.332449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.332493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.332541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.332594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.332644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.332688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.332739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.332789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.332849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.332899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.332947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.332992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.333041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.333085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.333129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.333174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.333231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.333280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.333329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.333402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.333453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.333499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.333555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.333606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.333655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.333747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.333796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.333852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.333912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.333970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.334019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.334081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.334136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.334184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.334240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.334288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.334339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.334388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.334440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.334486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.334580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.334632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.334836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.334900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.334964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.335698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.335752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.335792] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.335850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.335897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.335944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.335992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.336036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.336084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.336127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.336170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.336229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.336278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.336320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.336358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.336404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.336449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.336496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.336547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.336596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.336639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.336685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.336734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.336783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.336835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.336891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.336937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 [2024-10-16 07:13:05.336979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.015 
[2024-10-16 07:13:05.337025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:30:06.015 [previous *ERROR* line repeated verbatim from 07:13:05.337025 through 07:13:05.369003; duplicates collapsed]
00:30:06.017 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:30:06.021 true
00:30:06.021 [2024-10-16 07:13:05.369003] ctrlr_bdev.c:
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.369048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.369088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.369134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.369178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.369223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.369269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.369316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.369352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.369403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.369449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.369491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.369534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.369582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.369658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.369711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.369761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.369808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.369866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.369912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.369961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.370011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.370061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.370105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.370324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.370365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 
[2024-10-16 07:13:05.370414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.370450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.370486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.370522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.370559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.370596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.370632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.370667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.370704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.370741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.370777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.371328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.371371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.371407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.371442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.371478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.371513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.371549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.371584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.371620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.371656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.371693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.371732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.371768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.371804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.371841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.371883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.371921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.371956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.371991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.372027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.372079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.372127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.372171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.372218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.372273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.372317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.372366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.372416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.372465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.372515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.372563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.372615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.372665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.372709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.372752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.372799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.372879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.372927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.372972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.373022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.373072] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.373122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.373172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.373215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.373286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.021 [2024-10-16 07:13:05.373331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.373377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.373432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.373475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.373527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.373574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.373623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.373671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.373716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.373771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.373822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.373887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.373934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.373984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.374030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.374080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.374124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.374167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.374212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.374397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.374451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 
[2024-10-16 07:13:05.374508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.375054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.375103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.375156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.375203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.375259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.375307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.375382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.375430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.375481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.375530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.375581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.375637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.375684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.375729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.375773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.375820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.375875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.375921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.375970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.376014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.376062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.376106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.376153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.376196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.376246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.376291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.376342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.376391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.376439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.376492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.376537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.376580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.376627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.376677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.376731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.376783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.376832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.376893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.376947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.376994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.377040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.377088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.377134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.377182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.377224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.377269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.377312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.377361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.377403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.377451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.377520] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.377590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.377652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.377694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.377753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.377815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.377867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.377912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.377963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.378025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.378304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.378362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.378416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.378461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.378508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.378568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.378618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.378664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.378715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.378759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.378811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.378863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.378911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.378960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.379009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.379064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 
[2024-10-16 07:13:05.379112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.379159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.379223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.379266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.379347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.379393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.379437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.022 [2024-10-16 07:13:05.379489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.379532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.379584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.379629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.379686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.379731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.379785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.379834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.379890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.379943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.379991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.380037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.380081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.380128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.380170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.380212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.380258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.380304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.380366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.380411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.380463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.380512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.380555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.380610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.380663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.380712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.380755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.380803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.380851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.380900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.380964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.381012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.381059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.381107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.381168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.381212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.381253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.381301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.381343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.381394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.381439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.381629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.381680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.381729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.382439] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.382490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.382528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.382582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.382632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.382680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.382723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.382774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.382823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.382875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.382927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.382977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.383023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.383067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.383115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.383163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.383207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.383254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.383309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.383352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.383400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.383460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.383509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.383556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.383605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.383655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 
[2024-10-16 07:13:05.383703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.383749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.383835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.383884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.383935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.383985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.384030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.384087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.384136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.384205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.384253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.384303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.384348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.384399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.384439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.384480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.384526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.384568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.384611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.384654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.384692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.384742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.384791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.384838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.384904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.384950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.384993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.385035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.385080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.385137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.385189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.385239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.385284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.385331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.023 [2024-10-16 07:13:05.385375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.024 [2024-10-16 07:13:05.385428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.024 [2024-10-16 07:13:05.385476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.024 [2024-10-16 07:13:05.385523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.024 [2024-10-16 07:13:05.385733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.024 [2024-10-16 07:13:05.385785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.024 [2024-10-16 07:13:05.385835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.024 [2024-10-16 07:13:05.385884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.024 [2024-10-16 07:13:05.385935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.024 [2024-10-16 07:13:05.385983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.024 [2024-10-16 07:13:05.386035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.024 [2024-10-16 07:13:05.386082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.024 [2024-10-16 07:13:05.386127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.024 [2024-10-16 07:13:05.386176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.024 [2024-10-16 07:13:05.386220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.024 [2024-10-16 07:13:05.386270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.024 [2024-10-16 07:13:05.386318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.024 [2024-10-16 07:13:05.386833] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.024 [2024-10-16 07:13:05.386890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.024 [2024-10-16 07:13:05.386944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.024 [2024-10-16 07:13:05.387011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.024 [2024-10-16 07:13:05.387061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.024 [2024-10-16 07:13:05.387114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.024 [2024-10-16 07:13:05.387159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.024 [2024-10-16 07:13:05.387202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.024 [2024-10-16 07:13:05.387260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.024 [2024-10-16 07:13:05.387308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.024 [2024-10-16 07:13:05.387359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.024 [2024-10-16 07:13:05.387408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.024 [2024-10-16 07:13:05.387466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.024 [2024-10-16 07:13:05.387514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.024 [2024-10-16 07:13:05.387563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.024 [2024-10-16 07:13:05.387610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.024 [2024-10-16 07:13:05.387658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.024 [2024-10-16 07:13:05.387709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.024 [2024-10-16 07:13:05.387755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.024 [2024-10-16 07:13:05.387805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.024 [2024-10-16 07:13:05.387857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.024 [2024-10-16 07:13:05.387901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.024 [2024-10-16 07:13:05.387946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.024 [2024-10-16 07:13:05.388007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.024 [2024-10-16 07:13:05.388062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.024 [2024-10-16 07:13:05.388109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.024 
[2024-10-16 07:13:05.388156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.024
[... same ERROR record repeated back-to-back, 07:13:05.388 through 07:13:05.398; duplicate records elided ...]
07:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3324575 00:30:06.026
[... ERROR record continues repeating; duplicates elided ...]
07:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:06.026
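[editor's note] The nvmf_subsystem_remove_ns RPC above is the hot-unplug half of the stress loop: it detaches namespace 1 from nqn.2016-06.io.spdk:cnode1 while reader processes still have I/O in flight, which is what produces the error storm on both sides of it. Inside the target this maps onto SPDK's pause -> remove -> resume sequence. The sketch below uses public function names from spdk/nvmf.h (spdk_nvmf_tgt_find_subsystem, spdk_nvmf_subsystem_pause, spdk_nvmf_subsystem_remove_ns, spdk_nvmf_subsystem_resume); exact signatures vary across SPDK releases, so treat it as a minimal illustration of the sequence, not the code this test runs:

    #include "spdk/nvmf.h"

    /* Sketch: detach namespace `nsid` the way the nvmf_subsystem_remove_ns
     * RPC does. Namespace changes are only legal while the subsystem is
     * paused, hence the callback chain. */

    static void
    resume_done(struct spdk_nvmf_subsystem *subsystem, void *cb_arg, int status)
    {
    	/* I/O to the remaining namespaces flows again; commands that were
    	 * in flight against the removed namespace completed with an error. */
    }

    static void
    pause_done(struct spdk_nvmf_subsystem *subsystem, void *cb_arg, int status)
    {
    	uint32_t nsid = *(uint32_t *)cb_arg;

    	if (status == 0) {
    		/* Detach the namespace; readers now see it disappear. */
    		spdk_nvmf_subsystem_remove_ns(subsystem, nsid);
    	}
    	spdk_nvmf_subsystem_resume(subsystem, resume_done, NULL);
    }

    static void
    hotplug_remove(struct spdk_nvmf_tgt *tgt, const char *subnqn, uint32_t *nsid)
    {
    	struct spdk_nvmf_subsystem *subsystem = spdk_nvmf_tgt_find_subsystem(tgt, subnqn);

    	if (subsystem != NULL) {
    		/* Quiesce I/O to `nsid` before detaching it. */
    		spdk_nvmf_subsystem_pause(subsystem, *nsid, pause_done, nsid);
    	}
    }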
[... ERROR record continues repeating, 07:13:05.398 through 07:13:05.402; duplicates elided ...]
Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:30:06.026
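[editor's note] The ERROR record and the suppressed completion status above are the same failure seen from both sides. On the target, nvmf_bdev_ctrlr_read_cmd() in ctrlr_bdev.c rejects the read because the payload the SGL describes is too small: 1 block * 512 bytes per block = 512 bytes required, against an SGL length of 1. On the wire that surfaces as sct=0 (generic command status) with sc=15 (0x0f, Data SGL Length Invalid), which is exactly the "Read completed with error (sct=0, sc=15)" message. A paraphrase of the guard, with its shape inferred from the log text rather than copied from SPDK:

    #include <stdbool.h>
    #include <stdint.h>

    /* Paraphrase of the check behind
     * "Read NLB 1 * block size 512 > SGL length 1":
     * a read of num_blocks logical blocks needs num_blocks * block_size
     * bytes of payload buffer, and the transport-provided SGL covers only
     * sgl_length bytes. Here 1 * 512 = 512 > 1, so the command is rejected
     * and completes with sct=0, sc=0x0f (Data SGL Length Invalid = 15). */
    static bool
    read_buffer_large_enough(uint64_t num_blocks, uint32_t block_size, uint32_t sgl_length)
    {
    	return num_blocks * (uint64_t)block_size <= sgl_length;
    }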
[... ERROR record continues repeating, 07:13:05.402 through 07:13:05.418; duplicates elided ...]
[2024-10-16 07:13:05.418557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.029 [2024-10-16 07:13:05.418609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:30:06.029 [2024-10-16 07:13:05.418663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.029 [2024-10-16 07:13:05.418711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.029 [2024-10-16 07:13:05.418754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.029 [2024-10-16 07:13:05.418798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.418866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.418913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.418952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.418999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.419046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.419098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.419145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.419967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.420047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.420096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.420145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.420229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.420278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.420327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.420372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.420417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.420469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.420517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.420568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.420622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.420668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.420723] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.420768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.420827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.420894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.420945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.420995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.421040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.421093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.421143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.421188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.421237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.421289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.421332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.421426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.421472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.421520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.421566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.421612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.421686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.421737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.421783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.421853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.421893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.421944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.421991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.422038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 
[2024-10-16 07:13:05.422098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.422167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.422210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.422258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.422305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.422349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.422395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.422457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.422507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.422544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.422580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.422628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.422677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.422725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.422807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.422860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.422906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.422949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.423031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.423081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.423126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.423169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.423206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.423255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.423484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.423535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.423587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.423635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.423681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.423730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.423780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.423832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.423882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.423941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.423988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.424036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.424079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.424129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.424194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.424244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.424292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.424344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.424395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.424442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.424486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.030 [2024-10-16 07:13:05.424534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.424586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.424635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.424709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.424758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.424807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.424887] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.424934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.424980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.425040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.425087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.425143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.425196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.425245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.425330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.425377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.425429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.425474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.425524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.425570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.425615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.425687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.425738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.425785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.425855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.425896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.425943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.425983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.426028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.426637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.426706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.426766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 
[2024-10-16 07:13:05.426820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.426869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.426914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.426974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.427037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.427083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.427127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.427173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.427223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.427269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.427316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.427366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.427416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.427464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.427515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.427560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.427601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.427646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.427693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.427739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.427785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.427831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.427900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.427946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.427993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.428034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.428079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.428137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.428188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.428244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.428325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.428369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.428414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.428460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.428509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.428581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.428624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.428667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.428713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.428757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.428804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.428856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.428905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.428951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.429014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.429066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.429112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.429152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.429197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.429243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.429280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.429331] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.429376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.031 [2024-10-16 07:13:05.429420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.429466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.429512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.429566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.429617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.429663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.429712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.429760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.429984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.430033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.430100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.430145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.430195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.430253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.430299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.430374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.430420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.430464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.430512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.430560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.430638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.431345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.431397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.431446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 
[2024-10-16 07:13:05.431495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.431548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.431595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.431644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.431734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.431793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.431852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.431903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.431949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.432000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.432052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.432095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.432134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.432182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.432231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.432277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.432326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.432393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.432443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.432510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.432565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.432623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.432691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.432743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.432793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.432831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.432888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.432933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.432979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.433021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.433068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.433112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.433159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.433225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.433272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.433316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.433362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.433398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.433448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.433495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.433547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.433595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.433640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.433694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.433743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.433788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.433834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.433892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.433937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.433996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.434041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.434092] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.434143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.434202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.434257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.434304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.434348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.434392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.434442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.434492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.434542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.434749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.434825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.434880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.434928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.434988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.032 [2024-10-16 07:13:05.435034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.033 [2024-10-16 07:13:05.435083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.033 [2024-10-16 07:13:05.435133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.033 [2024-10-16 07:13:05.435179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.033 [2024-10-16 07:13:05.435225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.033 [2024-10-16 07:13:05.435269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.033 [2024-10-16 07:13:05.435315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.033 [2024-10-16 07:13:05.435361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.033 [2024-10-16 07:13:05.435414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.033 [2024-10-16 07:13:05.435456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.033 [2024-10-16 07:13:05.435504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.033 
[2024-10-16 07:13:05.435566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.033 [2024-10-16 07:13:05.435613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.033 [2024-10-16 07:13:05.435664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.033 [2024-10-16 07:13:05.435715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.033 [2024-10-16 07:13:05.435762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.033 [2024-10-16 07:13:05.435833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.033 [2024-10-16 07:13:05.435889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.033 [2024-10-16 07:13:05.435942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.033 [2024-10-16 07:13:05.436002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.033 [2024-10-16 07:13:05.436052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.033 [2024-10-16 07:13:05.436108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.033 [2024-10-16 07:13:05.436156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.033 [2024-10-16 07:13:05.436198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.033 [2024-10-16 07:13:05.436252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.033 [2024-10-16 07:13:05.436297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.033 [2024-10-16 07:13:05.436356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.033 [2024-10-16 07:13:05.436436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.033 [2024-10-16 07:13:05.436500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.033 [2024-10-16 07:13:05.436545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.033 [2024-10-16 07:13:05.436596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.033 [2024-10-16 07:13:05.436639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.033 [2024-10-16 07:13:05.436686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.033 [2024-10-16 07:13:05.436733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.033 [2024-10-16 07:13:05.436775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.033 [2024-10-16 07:13:05.436822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.033 [2024-10-16 07:13:05.436872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:30:06.033 [2024-10-16 07:13:05.436920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.033 [2024-10-16 07:13:05.436969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.033 [2024-10-16 07:13:05.437032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.033 [2024-10-16 07:13:05.437081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.033 [2024-10-16 07:13:05.437130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.033 [2024-10-16 07:13:05.437175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.033 [2024-10-16 07:13:05.437219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.033 [2024-10-16 07:13:05.437265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.033 [2024-10-16 07:13:05.437988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.033 [2024-10-16 07:13:05.438040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.033 [2024-10-16 07:13:05.438090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.033 [2024-10-16 07:13:05.438139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.033 [2024-10-16 07:13:05.438181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.033 [2024-10-16 07:13:05.438227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.033 [2024-10-16 07:13:05.438281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.033 [2024-10-16 07:13:05.438330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.033 [2024-10-16 07:13:05.438374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.033 [2024-10-16 07:13:05.438439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.033 [2024-10-16 07:13:05.438485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.033 [2024-10-16 07:13:05.438542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.033 [2024-10-16 07:13:05.438593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.033 [2024-10-16 07:13:05.438641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.033 [2024-10-16 07:13:05.438688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.033 [2024-10-16 07:13:05.438738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.033 [2024-10-16 07:13:05.438783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.033 [2024-10-16 07:13:05.438829] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:30:06.033 [2024-10-16 07:13:05.438885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:30:06.033 [... identical *ERROR* entry repeated for timestamps 07:13:05.438932 through 07:13:05.470638; duplicate log lines elided ...]
00:30:06.309 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:30:06.311 [2024-10-16 07:13:05.470689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:30:06.311 [2024-10-16 07:13:05.470735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.311 [2024-10-16 07:13:05.470782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.311 [2024-10-16 07:13:05.470828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.311 [2024-10-16 07:13:05.470889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.311 [2024-10-16 07:13:05.470948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.311 [2024-10-16 07:13:05.470995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.311 [2024-10-16 07:13:05.471047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.311 [2024-10-16 07:13:05.471094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.311 [2024-10-16 07:13:05.471146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.311 [2024-10-16 07:13:05.471197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.311 [2024-10-16 07:13:05.471259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.311 [2024-10-16 07:13:05.471305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.311 [2024-10-16 07:13:05.471356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.311 [2024-10-16 07:13:05.471407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.311 [2024-10-16 07:13:05.471453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.311 [2024-10-16 07:13:05.471496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.311 [2024-10-16 07:13:05.471541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.311 [2024-10-16 07:13:05.471587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.311 [2024-10-16 07:13:05.471631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.311 [2024-10-16 07:13:05.471678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.311 [2024-10-16 07:13:05.471726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.311 [2024-10-16 07:13:05.471770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.311 [2024-10-16 07:13:05.471838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.311 [2024-10-16 07:13:05.471891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.311 [2024-10-16 07:13:05.471937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.311 [2024-10-16 07:13:05.471978] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.311 [2024-10-16 07:13:05.472025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.311 [2024-10-16 07:13:05.472070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.311 [2024-10-16 07:13:05.472108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.311 [2024-10-16 07:13:05.472159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.311 [2024-10-16 07:13:05.472220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.311 [2024-10-16 07:13:05.472269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.311 [2024-10-16 07:13:05.472318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.311 [2024-10-16 07:13:05.472383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.311 [2024-10-16 07:13:05.472431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.311 [2024-10-16 07:13:05.472483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.311 [2024-10-16 07:13:05.472528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.311 [2024-10-16 07:13:05.472571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.311 [2024-10-16 07:13:05.472617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.311 [2024-10-16 07:13:05.472671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.311 [2024-10-16 07:13:05.472715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.311 [2024-10-16 07:13:05.472772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.311 [2024-10-16 07:13:05.472817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.311 [2024-10-16 07:13:05.472873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.311 [2024-10-16 07:13:05.472918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.311 [2024-10-16 07:13:05.472966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.311 [2024-10-16 07:13:05.473037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.311 [2024-10-16 07:13:05.473246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.311 [2024-10-16 07:13:05.473295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.311 [2024-10-16 07:13:05.473347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.311 [2024-10-16 07:13:05.473395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.311 
[2024-10-16 07:13:05.473440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.311 [2024-10-16 07:13:05.473498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.311 [2024-10-16 07:13:05.473540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.311 [2024-10-16 07:13:05.473595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.311 [2024-10-16 07:13:05.473642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.311 [2024-10-16 07:13:05.473710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.473756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.473804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.473870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.473918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.473973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.474019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.474061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.474117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.474164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.474221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.474269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.474318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.474368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.474417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.474461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.474522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.474570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.474622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.474665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.474708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.474759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.474820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.474886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.474933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.474984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.475029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.475081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.475130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.475186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.475233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.475282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.475329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.475376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.475424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.475468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.475508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.475551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.475598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.475642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.475689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.476293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.476349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.476400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.476442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.476489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.476537] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.476582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.476648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.476727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.476798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.476860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.476904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.476958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.477038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.477087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.477136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.477185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.477234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.477281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.477324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.477373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.477423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.477468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.477516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.477565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.477606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.477650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.477693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.477744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.477790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.477841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 
[2024-10-16 07:13:05.477899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.477947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.477994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.478049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.478098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.478143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.478194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.478242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.478291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.478338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.478390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.478436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.478484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.478541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.478595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.478641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.478689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.478735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.478799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.312 [2024-10-16 07:13:05.478839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.478927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.478973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.479021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.479072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.479141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.479202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.479244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.479287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.479326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.479371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.479417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.479465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.479515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.479735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.479783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.479834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.479885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.479930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.479972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.480020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.480075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.480112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.480165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.480213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.480256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.480305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.481031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.481082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.481140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.481188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.481235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.481279] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.481328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.481395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.481441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.481489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.481535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.481590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.481640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.481686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.481735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.481782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.481829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.481884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.481933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.481981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.482026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.482069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.482115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.482167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.482213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.482257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.482304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.482362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.482409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.482480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.482531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 
[2024-10-16 07:13:05.482577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.482624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.482666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.482711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.482753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.482798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.482838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.482893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.482943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.482991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.483034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.483079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.483123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.483172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.483219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.483261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.483305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.483345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.483390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.483427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.483477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.483519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.483566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.483613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.483658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.483711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.483759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.483804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.483855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.483895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.483943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.483986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.484039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.484302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.484350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.484396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.484441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.484488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.484537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.484585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.484646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.484695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.484745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.484789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.484854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.484902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.484946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.485020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.485070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.313 [2024-10-16 07:13:05.485120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.485169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.485216] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.485292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.485344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.485394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.485435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.485486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.485535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.485582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.485637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.485684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.485734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.485780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.485835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.485894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.485945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.485994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.486042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.486091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.486141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.486194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.486246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.486313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.486358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.486401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.486447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.486493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 
[2024-10-16 07:13:05.486535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.486580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.486627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.486670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.486718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.486756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.487370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.487423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.487473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.487524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.487569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.487621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.487673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.487722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.487758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.487810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.487861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.487903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.487950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.488013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.488061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.488110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.488152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.488198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.488248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.488294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.488346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.488394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.488451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.488495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.488541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.488588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.488635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.488698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.488747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.488792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.488849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.488898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.488943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.488997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.489046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.489092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.489137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.489186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.489230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.489286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.489336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.489385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.489427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.489470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.489515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.314 [2024-10-16 07:13:05.489559] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:30:06.314 [2024-10-16 07:13:05.489613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:30:06.314 [... identical *ERROR* line repeated several hundred times, unit-test timestamps 07:13:05.489613 through 07:13:05.519954; duplicates elided ...]
00:30:06.319 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:30:06.319 [... identical *ERROR* line repeated, unit-test timestamps 07:13:05.520001 through 07:13:05.522075; duplicates elided ...]
00:30:06.319 [2024-10-16 07:13:05.522125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL
length 1 00:30:06.319 [2024-10-16 07:13:05.522171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.319 [2024-10-16 07:13:05.522222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.319 [2024-10-16 07:13:05.522269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.319 [2024-10-16 07:13:05.522331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.319 [2024-10-16 07:13:05.522382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.319 [2024-10-16 07:13:05.522425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.319 [2024-10-16 07:13:05.522469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.319 [2024-10-16 07:13:05.522513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.319 [2024-10-16 07:13:05.522566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.319 [2024-10-16 07:13:05.522613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.319 [2024-10-16 07:13:05.522657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.319 [2024-10-16 07:13:05.522704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.319 [2024-10-16 07:13:05.522782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.319 [2024-10-16 07:13:05.522829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.319 [2024-10-16 07:13:05.522883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.319 [2024-10-16 07:13:05.522944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.319 [2024-10-16 07:13:05.522991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.319 [2024-10-16 07:13:05.523038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.319 [2024-10-16 07:13:05.523092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.319 [2024-10-16 07:13:05.523142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.319 [2024-10-16 07:13:05.523194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.319 [2024-10-16 07:13:05.523244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.523297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.523342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.523400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.523437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.523485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.523528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.523569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.523618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.523663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.523744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.523796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.523856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.523917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.523963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.524014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.524071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.524116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.524166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.524218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.524268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.524314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.524365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.524416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.524464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.524543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.524590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.524640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.524710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.524758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.524815] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.524873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.525091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.525141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.525185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.525234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.525283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.525334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.525381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.525429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.525480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.525531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.525582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.525633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.525690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.525737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.525786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.525837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.525891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.525943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.525985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.526036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.526086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.526146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.526198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.526248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 
[2024-10-16 07:13:05.526311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.526363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.526413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.526473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.526518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.526564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.526637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.526686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.526739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.527408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.527470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.527523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.527569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.527616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.527679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.527729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.527777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.527820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.527870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.527918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.527962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.528007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.528067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.528112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.528164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.528217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.528272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.528324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.528375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.528424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.528477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.528528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.528573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.528626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.528668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.528712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.528763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.528806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.528860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.528910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.528967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.529019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.529070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.529122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.529169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.529230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.529286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.529333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.529413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.529463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.529513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.529583] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.529634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.529679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.529767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.529817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.320 [2024-10-16 07:13:05.529869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.529922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.529975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.530028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.530086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.530134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.530180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.530232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.530282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.530327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.530376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.530447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.530496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.530545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.530586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.530639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.530710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.530910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.530997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.531045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.531087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 
[2024-10-16 07:13:05.531135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.531198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.531251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.531299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.531358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.531410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.531454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.531506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.531558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.531604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.531650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.531698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.531752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.531803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.531841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.531897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.531973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.532024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.532075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.532144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.532195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.532243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.532308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.532358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.532407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.532468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.533150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.533219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.533287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.533336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.533388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.533437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.533492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.533542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.533588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.533637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.533688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.533736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.533797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.533861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.533912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.533962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.534027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.534074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.534131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.534173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.534219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.534264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.534308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.534385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.534431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.534471] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.534518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.534564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.534606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.534663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.534712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.534756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.534853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.534900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.534945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.534998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.535044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.535089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.535133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.535173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.535224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.535273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.535315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.535369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.535425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.535477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.535520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.535567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.535619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.535666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.535720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 
[2024-10-16 07:13:05.535770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.535816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.535875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.535928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.535969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.536017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.536068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.536106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.536153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.536200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.536249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.536302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.536355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.536558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.536609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.321 [2024-10-16 07:13:05.536663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.536711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.536762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.536811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.536875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.536930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.536983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.537029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.537095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.537147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.537194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.537252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.537303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.537351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.537410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.537455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.537499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.537554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.537600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.537655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.537710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.537760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.537803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.537860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.537904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.537949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.538001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.538047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.538097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.538179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.538230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.538891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.538946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.539006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.539052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.539099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.539152] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.539202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.539252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.539309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.539360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.539434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.539484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.539530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.539578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.539626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.539673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.539720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.539773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.539819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.539886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.539935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.539989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.540039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.540107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.540150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.540212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.540264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.540335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.540380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.540429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.540472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 
[2024-10-16 07:13:05.540520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.540567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.540618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.540665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.540711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.540761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.540806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.540856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.540919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.540969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.541014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.541067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.541113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.541162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.541210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.541259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.541316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.541363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.541406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.541457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.541504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.541549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.541598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.541636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.541684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.541733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:30:06.322 [2024-10-16 07:13:05.541784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
[identical *ERROR* line repeated several hundred times, timestamps 07:13:05.541832 through 07:13:05.572209; duplicates collapsed]
[2024-10-16 07:13:05.572110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.328 [2024-10-16 07:13:05.572160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.328 [2024-10-16 07:13:05.572209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.328 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:06.328 07:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:06.328 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:06.328 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:06.328 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:06.328 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:06.328 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:06.608 [2024-10-16 07:13:05.802453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.608 [2024-10-16 07:13:05.802518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.608 [2024-10-16 07:13:05.802560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.608 [2024-10-16 07:13:05.802600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.608 [2024-10-16 07:13:05.802640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.608 [2024-10-16 07:13:05.802675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.608 [2024-10-16 07:13:05.802721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.608 [2024-10-16 07:13:05.802758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.608 [2024-10-16 07:13:05.802803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.608 [2024-10-16 07:13:05.802855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.608 [2024-10-16 07:13:05.802896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.608 [2024-10-16 07:13:05.802937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.608 [2024-10-16 07:13:05.802978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.608 [2024-10-16 07:13:05.803017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.608 [2024-10-16 07:13:05.803057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.608 [2024-10-16 07:13:05.803103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.608 [2024-10-16 07:13:05.803143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.608 [2024-10-16 07:13:05.803182] ctrlr_bdev.c: 
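The repeated *ERROR* line is the target-side length check in SPDK's nvmf_bdev_ctrlr_read_cmd (ctrlr_bdev.c): a read for NLB logical blocks needs NLB * block_size bytes of payload buffer, and the command is failed when that exceeds the SGL length the host supplied; the initiator side then sees the read completions with an error status, logged above as "Read completed with error (sct=0, sc=11)". A minimal standalone C sketch of that validation follows; the names are illustrative, not the SPDK source.

#include <inttypes.h>
#include <stdio.h>

/* Sketch of the check behind the error above (hypothetical names):
 * an NVMe read transfers num_blocks * block_size bytes, so the
 * request must be rejected when the host's SGL cannot hold them. */
static int validate_read_length(uint64_t num_blocks, uint64_t block_size,
                                uint32_t sgl_length)
{
    if (num_blocks * block_size > sgl_length) {
        fprintf(stderr, "Read NLB %" PRIu64 " * block size %" PRIu64
                " > SGL length %" PRIu32 "\n",
                num_blocks, block_size, sgl_length);
        return -1; /* fail the command rather than overrun the buffer */
    }
    return 0;
}

int main(void)
{
    /* The case this run keeps hitting: 1 block of 512 bytes against a 1-byte SGL. */
    validate_read_length(1, 512, 1); /* prints the error line once */
    return 0;
}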
00:30:06.608 [2024-10-16 07:13:05.802453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... same *ERROR* line repeated with consecutive timestamps from 07:13:05.802518 through 07:13:05.825685; duplicate lines collapsed ...]
00:30:06.612 [2024-10-16 07:13:05.825728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:30:06.612 [2024-10-16 07:13:05.825768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.612 [2024-10-16 07:13:05.825807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.612 [2024-10-16 07:13:05.825854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.612 [2024-10-16 07:13:05.825901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.612 [2024-10-16 07:13:05.825945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.825990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.826036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.826077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.826258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.826295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.826334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.826374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.827008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.827061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.827099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.827138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.827179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.827216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.827258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.827295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.827335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.827383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.827428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.827473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.827513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.827558] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.827599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.827640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.827680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.827728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.827769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.827811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.827855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.827896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.827938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.827981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.828021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.828062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.828106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.828149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.828190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.828232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.828276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.828345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.828388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.828431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.828472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.828513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.828554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.828598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.828641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 
[2024-10-16 07:13:05.828683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.828732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.828774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.828826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.828876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.828923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.828962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.829004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.829047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.829090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.829135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.829174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.829222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.829263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.829307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.829352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.829413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.829452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.829537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.829580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.829828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.829884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.829928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.829977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.830014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.830054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.830096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.830136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.830176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.830216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.830258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.830296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.830344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.830390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.830431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.830470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.830513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.830564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.830627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.830678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.830718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.830757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.830797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.830834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.830880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.830920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.830959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.830993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.831035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.831077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.613 [2024-10-16 07:13:05.831118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.831153] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.831190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.831235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.831278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.831322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.831365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.831409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.831452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.831490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.831531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.831580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.831629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.831677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.831711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.831751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.831793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.831828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.831882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.831924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.831966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.832010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.832053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.832105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.832144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.832187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.832228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 
[2024-10-16 07:13:05.832271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.832311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.832359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.832399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.832445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.832486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.832530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.832712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.832756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.832799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.832840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.833552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.833601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.833646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.833691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.833732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.833774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.833816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.833863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.833902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.833944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.833985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.834030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.834075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.834115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.834153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.834192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.834238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.834279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.834321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.834358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.834400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.834439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.834478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.834514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.834558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.834605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.834648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.834689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.834728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.834770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.834812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.834857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.834899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.834944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.834987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.835030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.835073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.835122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.835166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.835211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.835255] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.835317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.835359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.835419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.835457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.835521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.835564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.835607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.835650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.835697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.835738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.835783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.835824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.835873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.835917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.835959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.836000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.836044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.836085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.836302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.836346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.836387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.836429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.836476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.836516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 [2024-10-16 07:13:05.836555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.614 
[... same *ERROR* line repeated from 07:13:05.836596 through 07:13:05.836885 ...] 00:30:06.615
Message suppressed 999 times: [2024-10-16 07:13:05.836925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.615
Read completed with error (sct=0, sc=15) 00:30:06.615
[... same *ERROR* line repeated from 07:13:05.836970 through 07:13:05.837299 ...] 00:30:06.615
07:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:30:06.615
[... same *ERROR* line repeated from 07:13:05.837340 through 07:13:05.837656 ...] 00:30:06.615
07:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:30:06.615
[... same *ERROR* line repeated from 07:13:05.837696 through 07:13:05.847939 ...] 00:30:06.617
[2024-10-16 07:13:05.847978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.617 [2024-10-16 07:13:05.848032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.617 [2024-10-16 07:13:05.848075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.617 [2024-10-16 07:13:05.848118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.617 [2024-10-16 07:13:05.848162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.617 [2024-10-16 07:13:05.848204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.617 [2024-10-16 07:13:05.848245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.617 [2024-10-16 07:13:05.848285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.617 [2024-10-16 07:13:05.848328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.617 [2024-10-16 07:13:05.848372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.617 [2024-10-16 07:13:05.848421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.617 [2024-10-16 07:13:05.848462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.617 [2024-10-16 07:13:05.848507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.617 [2024-10-16 07:13:05.848548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.617 [2024-10-16 07:13:05.848592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.617 [2024-10-16 07:13:05.848632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.617 [2024-10-16 07:13:05.848674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.617 [2024-10-16 07:13:05.848714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.617 [2024-10-16 07:13:05.848752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.617 [2024-10-16 07:13:05.848794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.617 [2024-10-16 07:13:05.848835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.617 [2024-10-16 07:13:05.848882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.617 [2024-10-16 07:13:05.848931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.617 [2024-10-16 07:13:05.848971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.617 [2024-10-16 07:13:05.849152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.617 [2024-10-16 07:13:05.849194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:30:06.617 [2024-10-16 07:13:05.849235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.617 [2024-10-16 07:13:05.849275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.617 [2024-10-16 07:13:05.849319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.617 [2024-10-16 07:13:05.849360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.617 [2024-10-16 07:13:05.849403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.617 [2024-10-16 07:13:05.849440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.617 [2024-10-16 07:13:05.849479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.617 [2024-10-16 07:13:05.849521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.617 [2024-10-16 07:13:05.849559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.617 [2024-10-16 07:13:05.849602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.617 [2024-10-16 07:13:05.850470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.617 [2024-10-16 07:13:05.850540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.617 [2024-10-16 07:13:05.850587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.617 [2024-10-16 07:13:05.850638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.617 [2024-10-16 07:13:05.850684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.617 [2024-10-16 07:13:05.850726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.617 [2024-10-16 07:13:05.850775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.617 [2024-10-16 07:13:05.850818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.617 [2024-10-16 07:13:05.850865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.617 [2024-10-16 07:13:05.850904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.617 [2024-10-16 07:13:05.850944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.617 [2024-10-16 07:13:05.850987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.851027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.851074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.851115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.851167] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.851208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.851253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.851295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.851335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.851379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.851423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.851463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.851555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.851597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.851652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.851692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.851744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.851787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.851830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.851877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.851920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.851965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.852009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.852051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.852090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.852135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.852178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.852224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.852267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.852314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 
[2024-10-16 07:13:05.852353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.852402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.852439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.852478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.852519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.852560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.852599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.852650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.852697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.852752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.852805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.852853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.852892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.852932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.852976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.853017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.853058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.853101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.853145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.853187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.853227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.853269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.853313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.853498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.853542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.853584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.853626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.853665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.853705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.853747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.853790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.853834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.853882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.853921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.853959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.854005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.854046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.854087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.854123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.854167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.854209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.854247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.854288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.854329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.854370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.854412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.854454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.854498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.854542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.854592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.854636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.854683] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.854730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.854772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.854813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.854859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.854900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.854937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.854981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.855021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.855060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.855103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.855148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.855195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.855240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.855281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.855324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.855367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.855408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.855453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.855499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.855543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.618 [2024-10-16 07:13:05.855587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.855628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.856113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.856183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.856222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 
[2024-10-16 07:13:05.856296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.856339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.856384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.856426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.856473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.856515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.856554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.856600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.856642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.856683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.856722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.856764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.856804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.856852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.856893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.856957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.857001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.857047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.857089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.857133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.857182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.857224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.857265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.857304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.857348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.857390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.857455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.857497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.857549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.857590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.857631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.857672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.857716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.857754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.857796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.857840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.857889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.857929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.857970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.858011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.858051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.858091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.858127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.858531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.858575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.858609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.858655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.858697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.858739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.858781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.858828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.858877] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.858917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.858955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.859000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.859044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.859090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.859131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.859173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.859214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.859256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.859297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.859341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.859381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.859429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.859474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.859522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.859562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.859606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.859647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.859689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.859731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.859773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.859814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.859863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.859908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.859950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 
[2024-10-16 07:13:05.859993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.860034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.860073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.860112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.860150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.860191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.860232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.860272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.860321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.860368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.860411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.860446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.860492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.860536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.860576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.860617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.860656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.860698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.860738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.860779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.860822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.860867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.619 [2024-10-16 07:13:05.860908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.860956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.860996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.861041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.861081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.861121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.861160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.861207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.861405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.861449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.861491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.861531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.861574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.861615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.861656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.861694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.861741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.861783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.861820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.861873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.861911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.861949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.861992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.862042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.862086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.862713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.862759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.862804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.862853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.862904] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.862944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.862986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.863027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.863068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.863111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.863152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.863196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.863234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.863272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.863312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.863352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.863395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.863440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.863482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.863520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.863557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.863595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.863634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.863674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.863713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.863759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.863796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.863842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.863887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.863929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 
[2024-10-16 07:13:05.863968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.864009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.864046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.864089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.864125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.864166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.864210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.864249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.864291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.864332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.864372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.864417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.864455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.864496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.864538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.864578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.864613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.864659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.864703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.864747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.864798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.864840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.864918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.864963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.865007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.620 [2024-10-16 07:13:05.865049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1
00:30:06.620-00:30:06.625 [... the same ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 entry repeated several hundred times between 2024-10-16 07:13:05.865096 and 07:13:05.890963; duplicate lines elided ...]
00:30:06.625 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
Read NLB 1 * block size 512 > SGL length 1 00:30:06.625 [2024-10-16 07:13:05.891004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.625 [2024-10-16 07:13:05.891043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.891084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.891906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.891996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.892035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.892112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.892152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.892193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.892234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.892282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.892319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.892365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.892406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.892447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.892489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.892528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.892579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.892621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.892666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.892709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.892771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.892815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.892864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.892906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.892947] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.892987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.893027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.893065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.893106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.893146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.893189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.893228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.893267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.893311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.893361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.893404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.893447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.893490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.893532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.893574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.893611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.893653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.893693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.893737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.893780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.893821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.893879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.893919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.893958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.894005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 
[2024-10-16 07:13:05.894048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.894090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.894128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.894169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.894204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.894247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.894290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.894327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.894367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.894409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.894448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.894488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.894528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.894569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.894611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.894650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.894905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.894946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.894985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.895023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.895061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.895102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.895142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.895184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.895226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.895266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.895307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.895350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.895401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.895441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.895483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.895521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.895964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.896009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.896045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.896091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.896131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.896175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.896218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.896253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.896296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.896333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.896366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.896400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.896434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.896468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.896503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.896537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.896577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.896617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.896659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.896698] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.896739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.626 [2024-10-16 07:13:05.896776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.896810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.896850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.896885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.896918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.896952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.896986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.897020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.897056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.897090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.897131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.897173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.897214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.897256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.897298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.897339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.897377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.897421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.897464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.897510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.897548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.897604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.897644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.897685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 
[2024-10-16 07:13:05.897727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.897775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.897820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.897892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.897938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.897982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.898024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.898073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.898114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.898156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.898196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.898234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.898274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.898317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.898360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.898405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.898447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.898492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.898532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.898721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.898775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.898816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.898864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.898908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.898950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.898990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.899030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.899074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.899115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.899159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.899195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.899237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.899277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.899317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.899358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.899398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.899871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.899914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.899955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.899995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.900034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.900074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.900113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.900153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.900192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.900236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.900279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.900322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.900366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.900404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.900449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.900487] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.900526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.900565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.900606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.900650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.900690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.900737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.900777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.900824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.900871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.900914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.900954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.901000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.901043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.901083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.901125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.901166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.901206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.901247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.901288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.901329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.901369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.901408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.901453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.901493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.901539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 
[2024-10-16 07:13:05.901578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.901627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.901666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.627 [2024-10-16 07:13:05.901709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.901750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.901804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.901850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.901892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.901934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.901976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.902017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.902058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.902098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.902141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.902177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.902219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.902259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.902303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.902344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.902389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.902427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.902468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.902511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.902680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.902726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.902764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.902806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.902849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.902889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.902932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.902974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.903012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.903051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.903088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.903126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.903160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.903200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.903240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.903283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.903325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.903365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.903407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.903446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.903491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.903532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.903580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.903623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.903669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.903711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.903749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.903791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.903828] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.903872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.903913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.903954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.903994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.904034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.904075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.904118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.904162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.904203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.904244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.904285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.904327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.904365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.904412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.904453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.904492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.904532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.905080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.905118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.905164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.905206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.905244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.905295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.905349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.905403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 
[2024-10-16 07:13:05.905445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.905489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.905530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.905574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.905615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.905658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.905699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.905744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.905789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.905829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.905877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.905924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.905964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.906005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.906046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.906082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.906125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.906183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.906224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.628 [2024-10-16 07:13:05.906285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.629 [2024-10-16 07:13:05.906327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.629 [2024-10-16 07:13:05.906373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.629 [2024-10-16 07:13:05.906417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.629 [2024-10-16 07:13:05.906471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.629 [2024-10-16 07:13:05.906514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.629 [2024-10-16 07:13:05.906556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:30:06.629 [2024-10-16 07:13:05.906599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.629 [2024-10-16 07:13:05.906641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.629 [2024-10-16 07:13:05.906682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.629 [2024-10-16 07:13:05.906723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.629 [2024-10-16 07:13:05.906762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.629 [2024-10-16 07:13:05.906804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.629 [2024-10-16 07:13:05.906850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.629 [2024-10-16 07:13:05.906892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.629 [2024-10-16 07:13:05.906948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.629 [2024-10-16 07:13:05.906991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.629 [2024-10-16 07:13:05.907033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.629 [2024-10-16 07:13:05.907076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.629 [2024-10-16 07:13:05.907121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.629 [2024-10-16 07:13:05.907160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.629 [2024-10-16 07:13:05.907198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.629 [2024-10-16 07:13:05.907236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.629 [2024-10-16 07:13:05.907275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.629 [2024-10-16 07:13:05.907313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.629 [2024-10-16 07:13:05.907354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.629 [2024-10-16 07:13:05.907393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.629 [2024-10-16 07:13:05.907433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.629 [2024-10-16 07:13:05.907470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.629 [2024-10-16 07:13:05.907519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.629 [2024-10-16 07:13:05.907570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.629 [2024-10-16 07:13:05.907615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.629 [2024-10-16 07:13:05.907654] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.629 [2024-10-16 07:13:05.907698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.629 [2024-10-16 07:13:05.907735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.629 [2024-10-16 07:13:05.907775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.629 [2024-10-16 07:13:05.907816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.629 [2024-10-16 07:13:05.908004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.629 [2024-10-16 07:13:05.908045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.629 [2024-10-16 07:13:05.908087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.629 [2024-10-16 07:13:05.908128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.629 [2024-10-16 07:13:05.908179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.629 [2024-10-16 07:13:05.908221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.629 [2024-10-16 07:13:05.908260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.629 [2024-10-16 07:13:05.908302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.629 [2024-10-16 07:13:05.908349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.629 [2024-10-16 07:13:05.908386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.629 [2024-10-16 07:13:05.908429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.629 [2024-10-16 07:13:05.908468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.629 [2024-10-16 07:13:05.908507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.629 [2024-10-16 07:13:05.908550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.629 [2024-10-16 07:13:05.908599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.629 [2024-10-16 07:13:05.908639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.629 [2024-10-16 07:13:05.908685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.629 [2024-10-16 07:13:05.909289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.629 [2024-10-16 07:13:05.909336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.629 [2024-10-16 07:13:05.909394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.629 [2024-10-16 07:13:05.909432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.629 
00:30:06.629 [2024-10-16 07:13:05.909473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:30:06.635 [... the same "Read NLB 1 * block size 512 > SGL length 1" error from ctrlr_bdev.c:361 repeated several hundred more times, timestamps 07:13:05.909514 through 07:13:05.935647 ...]
[2024-10-16 07:13:05.935686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.935725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.935764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.935805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.935850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.935892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.935931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.935973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.936015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.936060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.936100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.936142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.936183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.936378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.936426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.936471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.936518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.936562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.936602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.936642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.936685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.936724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.936767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.936808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.936857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.936903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.936946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.936988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.937027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.937069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.937696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.937740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.937784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.937828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.937878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.937923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.937963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:30:06.635 [2024-10-16 07:13:05.938001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.938041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.938084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.938131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.938189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.938244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.938294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.938345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.938397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.938447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.938488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.938538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.938588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.938634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:30:06.635 [2024-10-16 07:13:05.938678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.938720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.938760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.938800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.938837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.938883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.938926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.938971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.939013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.939056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.939096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.939138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.939179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.939220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.939261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.939305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.939347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.939387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.939431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.939465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.939510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.939548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.939587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.635 [2024-10-16 07:13:05.939629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.939670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.939930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.939973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.940022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.940061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.940149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.940191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.940236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.940273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.940314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.940354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.940394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.940435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.940476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.940516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.940559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.940596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.940637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.940675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.940715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.940775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.940815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.940862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.940906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.940947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.941003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.941044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.941088] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.941126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.941170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.941212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.941262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.941304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.941344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.941383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.941425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.941470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.941512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.941549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.941590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.941629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.941670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.941707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.941744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.941783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.941826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.941869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.941910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.941951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.941996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.942037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.942080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.942123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 
[2024-10-16 07:13:05.942179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.942224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.942263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.942312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.942353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.942395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.942434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.942473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.942512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.942555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.942594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.942633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.942829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.942876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.942917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.942957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.942999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.943041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.943082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.943129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.943172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.943212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.943254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.943296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.943339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.943381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.943424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.943472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.943513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.944115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.944160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.944204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.944245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.944284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.944330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.944372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.944417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.944456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.944499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.944544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.944583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.944626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.944678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.944734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.944788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.944848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.944897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.944933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.944976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.945018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.636 [2024-10-16 07:13:05.945057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.945096] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.945139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.945182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.945222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.945266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.945307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.945347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.945389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.945430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.945473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.945512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.945553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.945595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.945631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.945675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.945717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.945757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.945801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.945850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.945891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.945932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.945973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.946034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.946076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.946119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.946164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 
[2024-10-16 07:13:05.946208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.946249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.946291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.946329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.946371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.946418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.946460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.946500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.946540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.946590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.946632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.946674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.946715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.946758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.946796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.946836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.947028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.947069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.947118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.947159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.947199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.947239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.947276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.947314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.947359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.947407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.947451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.947491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.947531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.947585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.947626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.947688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.947729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.947776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.947815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.947864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.947904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.947943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.947989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.948030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.948078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.948120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.948161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.948201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.948243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.948283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.948323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.948368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.948422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.948469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.948510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.948552] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.948590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.948626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.948672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.948730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.948784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.948837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.948887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.948929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.948971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.949013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.949542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.949586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.949632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.949673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.949715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.637 [2024-10-16 07:13:05.949759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.638 [2024-10-16 07:13:05.949796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.638 [2024-10-16 07:13:05.949839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.638 [2024-10-16 07:13:05.949884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.638 [2024-10-16 07:13:05.949929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.638 [2024-10-16 07:13:05.949969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.638 [2024-10-16 07:13:05.950012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.638 [2024-10-16 07:13:05.950053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.638 [2024-10-16 07:13:05.950093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.638 [2024-10-16 07:13:05.950137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.638 
[2024-10-16 07:13:05.950173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.638 [2024-10-16 07:13:05.950213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.638 [2024-10-16 07:13:05.950257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.638 [2024-10-16 07:13:05.950306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.638 [2024-10-16 07:13:05.950348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.638 [2024-10-16 07:13:05.950390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.638 [2024-10-16 07:13:05.950433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.638 [2024-10-16 07:13:05.950498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.638 [2024-10-16 07:13:05.950540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.638 [2024-10-16 07:13:05.950584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.638 [2024-10-16 07:13:05.950625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.638 [2024-10-16 07:13:05.950666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.638 [2024-10-16 07:13:05.950706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.638 [2024-10-16 07:13:05.950748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.638 [2024-10-16 07:13:05.950820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.638 [2024-10-16 07:13:05.950870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.638 [2024-10-16 07:13:05.950915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.638 [2024-10-16 07:13:05.950956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.638 [2024-10-16 07:13:05.950995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.638 [2024-10-16 07:13:05.951042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.638 [2024-10-16 07:13:05.951080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.638 [2024-10-16 07:13:05.951122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.638 [2024-10-16 07:13:05.951163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.638 [2024-10-16 07:13:05.951201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.638 [2024-10-16 07:13:05.951244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.638 [2024-10-16 07:13:05.951283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:30:06.638 [2024-10-16 07:13:05.951324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.638 [2024-10-16 07:13:05.951368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.638 [2024-10-16 07:13:05.951406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.638 [2024-10-16 07:13:05.951446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.638 [2024-10-16 07:13:05.951486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.638 [2024-10-16 07:13:05.951526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.638 [2024-10-16 07:13:05.951568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.638 [2024-10-16 07:13:05.951605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.638 [2024-10-16 07:13:05.951639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.638 [2024-10-16 07:13:05.951682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.638 [2024-10-16 07:13:05.951722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.638 [2024-10-16 07:13:05.951762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.638 [2024-10-16 07:13:05.951802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.638 [2024-10-16 07:13:05.951852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.638 [2024-10-16 07:13:05.951893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.638 [2024-10-16 07:13:05.951935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.638 [2024-10-16 07:13:05.951979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.638 [2024-10-16 07:13:05.952022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.638 [2024-10-16 07:13:05.952064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.638 [2024-10-16 07:13:05.952107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.638 [2024-10-16 07:13:05.952150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.638 [2024-10-16 07:13:05.952187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.638 [2024-10-16 07:13:05.952229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.638 [2024-10-16 07:13:05.952521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.638 [2024-10-16 07:13:05.952565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:06.638 [2024-10-16 07:13:05.952605] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 records, [2024-10-16 07:13:05.952689] through [2024-10-16 07:13:05.959963], collapsed ...]
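The wall of ctrlr_bdev.c:361 records above is the target rejecting Read commands whose requested transfer, NLB (number of logical blocks) times the 512-byte block size, exceeds the 1-byte buffer described by the request's SGL. A minimal bash sketch of that comparison, using illustrative variable names rather than SPDK's own, with the values taken from the error text itself:

    nlb=1          # number of logical blocks requested by the Read
    block_size=512 # namespace block size in bytes
    sgl_length=1   # bytes available in the request's scatter-gather list
    if (( nlb * block_size > sgl_length )); then
        echo "Read NLB ${nlb} * block size ${block_size} > SGL length ${sgl_length}" >&2
    fi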
00:30:06.640 true
00:30:06.640 07:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3324575
00:30:06.640 07:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:07.580 07:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:30:07.840 07:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005
00:30:07.840 07:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005
00:30:08.099 true
00:30:08.099 07:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3324575
00:30:08.099 07:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:08.099 07:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:08.359 07:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:30:08.359 07:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:30:08.620 true 00:30:08.620 07:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3324575 00:30:08.620 07:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:10.004 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:10.004 07:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:10.004 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:10.004 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:10.004 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:10.004 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:10.004 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:10.004 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:10.004 07:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:30:10.004 07:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:30:10.004 true 00:30:10.004 07:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3324575 00:30:10.004 07:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:10.945 07:13:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:11.205 07:13:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:30:11.205 07:13:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:30:11.205 true 00:30:11.205 07:13:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3324575 00:30:11.205 07:13:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:11.464 07:13:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:11.724 07:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:30:11.724 07:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:30:11.724 true 00:30:11.724 07:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3324575 00:30:11.724 07:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:13.107 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:13.107 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:13.107 07:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:13.107 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:13.107 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:13.107 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:13.107 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:13.107 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:13.107 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:13.107 07:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:30:13.107 07:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:30:13.367 true 00:30:13.367 07:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3324575 00:30:13.367 07:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:14.309 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:14.309 07:13:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:14.309 07:13:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:30:14.309 07:13:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:30:14.570 true 00:30:14.570 07:13:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3324575 00:30:14.570 07:13:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:14.830 07:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:15.090 07:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:30:15.090 07:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:30:15.090 true 00:30:15.090 07:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3324575 00:30:15.090 07:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:15.351 07:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:15.612 07:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:30:15.612 07:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:30:15.612 true 00:30:15.612 07:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3324575 00:30:15.612 07:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:15.873 07:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:16.133 07:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:30:16.133 07:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:30:16.133 true 00:30:16.133 07:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3324575 00:30:16.133 07:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:16.394 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:16.394 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:16.394 07:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:16.394 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:16.394 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:16.394 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:16.655 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:16.655 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:16.655 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:16.655 07:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:30:16.655 07:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:30:16.916 true 00:30:16.916 07:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3324575 00:30:16.916 07:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:17.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:17.861 07:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:17.861 07:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:30:17.861 07:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:30:17.861 true 00:30:18.121 07:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3324575 00:30:18.121 07:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:18.121 07:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:18.382 07:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:30:18.382 07:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:30:18.643 true 00:30:18.643 07:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3324575 00:30:18.643 07:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:18.643 07:13:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:18.904 07:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:30:18.904 07:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:30:19.165 true 00:30:19.165 07:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3324575 00:30:19.165 07:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:19.426 07:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:19.426 07:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:30:19.426 07:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:30:19.687 true 00:30:19.687 07:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3324575 00:30:19.687 07:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:20.634 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:20.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:20.974 07:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:20.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:20.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:20.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:20.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:20.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:20.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:20.974 07:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:30:20.974 07:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:30:21.252 true 00:30:21.252 07:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3324575 00:30:21.252 07:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:22.196 07:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:22.196 07:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:30:22.196 07:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:30:22.196 true 00:30:22.457 07:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3324575 00:30:22.457 07:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:22.457 07:13:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:22.718 07:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:30:22.718 07:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:30:22.980 true 00:30:22.980 07:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3324575 00:30:22.980 07:13:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:24.366 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:24.366 07:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:24.366 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:24.366 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:24.366 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:24.366 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:24.366 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:24.366 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:24.366 07:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:30:24.366 07:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:30:24.366 true 00:30:24.367 07:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3324575 00:30:24.367 07:13:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:25.308 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:25.308 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:25.308 07:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:25.570 07:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:30:25.570 07:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:30:25.570 true 00:30:25.570 07:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3324575 00:30:25.570 07:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:25.832 07:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:26.093 07:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:30:26.093 07:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:30:26.093 true 00:30:26.093 07:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3324575 00:30:26.093 07:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:27.480 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:27.480 07:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:27.480 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:27.480 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:27.480 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:27.480 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:27.480 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:27.480 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:27.480 07:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:30:27.480 07:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:30:27.741 true 00:30:27.741 07:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3324575 00:30:27.741 07:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:28.686 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:28.686 07:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:28.686 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:28.686 07:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:30:28.686 07:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:30:28.948 true 00:30:28.948 07:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3324575 00:30:28.948 07:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:29.209 07:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:29.209 07:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:30:29.209 07:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:30:29.469 true 00:30:29.469 07:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3324575 00:30:29.469 07:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:29.730 07:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:29.991 07:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:30:29.991 07:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:30:29.991 true 00:30:29.991 07:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3324575 00:30:29.991 07:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:30.251 07:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:30.512 07:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:30:30.512 07:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:30:30.512 true 00:30:30.512 07:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3324575 00:30:30.512 07:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:31.895 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:31.895 07:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:31.895 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:31.895 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:31.895 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:31.895 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:31.895 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:31.895 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:31.895 07:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:30:31.895 07:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:30:32.156 true 00:30:32.156 07:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3324575 00:30:32.156 07:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:33.100 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:33.100 07:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:33.100 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:33.100 07:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:30:33.100 07:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:30:33.361 true 
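Interleaved with the error bursts, the xtrace records above step through lines 44-50 of ns_hotplug_stress.sh once per iteration: check that the I/O generator (PID 3324575) is still alive, hot-remove namespace 1, re-add the Delay0 bdev as a namespace, and grow the NULL1 bdev by one unit (null_size 1005, 1006, ...). A sketch of that loop as reconstructed from the trace; $pid and the initial $null_size stand in for the script's actual variables:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    while kill -0 "$pid"; do                                            # line 44: stop once the I/O generator exits
        "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # line 45: hot-remove namespace 1
        "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # line 46: re-attach it, backed by Delay0
        null_size=$((null_size + 1))                                    # line 49
        "$rpc" bdev_null_resize NULL1 "$null_size"                      # line 50: resize NULL1 under load
    done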
00:30:33.361 07:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3324575 00:30:33.361 07:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:33.361 07:13:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:33.623 07:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:30:33.623 07:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:30:33.884 true 00:30:33.884 07:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3324575 00:30:33.884 07:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:35.270 Initializing NVMe Controllers 00:30:35.270 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:35.270 Controller IO queue size 128, less than required. 00:30:35.270 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:35.270 Controller IO queue size 128, less than required. 00:30:35.270 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:35.270 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:35.270 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:35.270 Initialization complete. Launching workers. 
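One quick consistency check on the latency summary just below: the Total row's average latency is the IOPS-weighted mean of the two per-namespace averages. The figures are copied from the table; awk is only used here as a calculator:

    awk 'BEGIN {
        # NSID 1: 2742.29 IOPS at 28029.63 us average; NSID 2: 17480.08 IOPS at 7293.65 us
        total_iops = 2742.29 + 17480.08
        print (2742.29 * 28029.63 + 17480.08 * 7293.65) / total_iops   # ~10105.6 us, matching the Total row
    }'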
00:30:35.270 ========================================================
00:30:35.270                                                                            Latency(us)
00:30:35.270 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:30:35.270 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    2742.29       1.34   28029.63    1283.19 1051155.82
00:30:35.270 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   17480.08       8.54    7293.65    1303.00  401339.48
00:30:35.270 ========================================================
00:30:35.270 Total                                                                  :   20222.37       9.87   10105.59    1283.19 1051155.82
00:30:35.270 
00:30:35.270 07:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:30:35.270 07:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034
00:30:35.270 07:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034
00:30:35.270 true
00:30:35.270 07:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3324575
00:30:35.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3324575) - No such process
00:30:35.270 07:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3324575
00:30:35.270 07:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:35.530 07:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:30:35.791 07:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:30:35.791 07:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:30:35.791 07:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:30:35.791 07:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:30:35.791 07:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:30:35.791 null0
00:30:35.791 07:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:30:35.791 07:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:30:35.791 07:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:30:36.051 null1
00:30:36.051 07:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:30:36.051 
07:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:36.051 07:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:30:36.312 null2 00:30:36.312 07:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:36.312 07:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:36.312 07:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:30:36.312 null3 00:30:36.312 07:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:36.312 07:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:36.312 07:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:30:36.573 null4 00:30:36.573 07:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:36.573 07:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:36.573 07:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:30:36.573 null5 00:30:36.835 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:36.835 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:36.835 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:30:36.835 null6 00:30:36.835 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:36.835 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:36.835 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:30:37.097 null7 00:30:37.097 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:37.097 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:37.097 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:30:37.097 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:37.097 07:13:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:30:37.097 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:37.097 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:30:37.097 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:37.097 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:37.097 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.097 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:37.097 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:37.097 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:37.097 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:37.097 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:37.097 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:30:37.097 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:30:37.097 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:37.097 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.097 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
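The @58-@63 markers above trace the fan-out that follows the single-namespace phase: eight null bdevs are created, then one add_remove worker per bdev is started in the background and its PID collected (the remaining worker launches continue below). A sketch of that setup, reconstructed from the trace under the same stand-in names as before:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        "$rpc" bdev_null_create "null$i" 100 4096   # lines 59-60: create null$i with size 100 and block size 4096, as in the trace
    done
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &            # line 63: one hotplug worker per namespace
        pids+=($!)                                  # line 64: remember the worker's PID
    done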
00:30:37.097 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:30:37.097 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:30:37.097 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:30:37.097 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:30:37.097 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:30:37.097 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:30:37.097 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:37.097 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:30:37.097 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:30:37.097 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:30:37.097 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:30:37.097 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:30:37.097 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:30:37.097 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:30:37.097 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
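The interleaved @14-@18 entries are those background workers executing add_remove. Reconstructed from the traced line numbers alone (a sketch, not the script verbatim), the function plausibly reads:

    add_remove() {
        local nsid=$1 bdev=$2                  # @14
        for ((i = 0; i < 10; i++)); do         # @16: ten hotplug cycles per worker
            # @17: attach the bdev to the subsystem as namespace $nsid
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            # @18: detach it again, racing against the other seven workers
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }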
00:30:37.097 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:37.097 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:30:37.097 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:30:37.097 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:30:37.097 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:30:37.097 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:30:37.097 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:30:37.098 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:30:37.098 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:37.098 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:30:37.098 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:30:37.098 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:30:37.098 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:30:37.098 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:30:37.098 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:30:37.098 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:37.098 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
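After the last worker is launched, the parent blocks on all eight PIDs at @66 (the wait 3330890 ... 3330909 entry just below); everything from that point until the wait returns is the workers' interleaved add/remove churn. The corresponding script line is presumably just:

    wait "${pids[@]}"   # @66: block until every add_remove worker finishes its ten cycles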
00:30:37.098 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:30:37.098 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:30:37.098 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:30:37.098 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:30:37.098 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:30:37.098 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:30:37.098 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:30:37.098 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:37.098 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:30:37.098 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:30:37.098 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:30:37.098 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3330890 3330893 3330895 3330898 3330900 3330903 3330906 3330909
00:30:37.098 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:30:37.098 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:30:37.098 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:30:37.098 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:37.098 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:30:37.098 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:37.098 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:30:37.360 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:30:37.360 07:13:36
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:37.360 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:37.360 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:37.360 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:37.360 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:37.360 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.360 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.360 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:37.360 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.360 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.360 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:37.360 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.360 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.360 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:37.360 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.360 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.360 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.360 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.360 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 
nqn.2016-06.io.spdk:cnode1 null3 00:30:37.360 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:37.360 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.360 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.360 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:37.360 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.360 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.360 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:37.622 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.622 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.622 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:37.622 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:37.622 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:37.622 07:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:37.622 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:37.622 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:37.622 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:37.622 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:30:37.622 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.622 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.622 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:37.622 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:37.885 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.885 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.885 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:37.885 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.885 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.885 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:37.885 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.885 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.885 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:37.886 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.886 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.886 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:37.886 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.886 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.886 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:37.886 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:37.886 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.886 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.886 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:37.886 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.886 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.886 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:37.886 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:37.886 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:38.150 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:38.150 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:38.150 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:38.150 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.150 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.150 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:38.150 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.150 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.150 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:38.150 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:30:38.150 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.150 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:38.150 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:38.150 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:38.150 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.150 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.150 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:38.150 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.150 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.150 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:38.150 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:38.150 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:38.150 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:38.150 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.150 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.150 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:38.150 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.150 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.150 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:38.411 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.411 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.411 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:38.411 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.411 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.411 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:38.411 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:38.411 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.411 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.411 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:38.411 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.411 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.411 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:38.411 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:38.411 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:38.411 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:38.411 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:38.673 07:13:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.673 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.673 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:38.673 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:38.673 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:38.673 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:38.673 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.673 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.673 07:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:38.673 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.673 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.673 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:38.673 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.673 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.673 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:38.673 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.673 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.673 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:38.673 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:30:38.673 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.673 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.673 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:38.673 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.673 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.673 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:38.673 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.673 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.673 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:38.673 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:38.673 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:38.673 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:38.935 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:38.935 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.935 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.935 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:38.935 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:38.935 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.935 07:13:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.935 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:38.935 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:38.935 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.935 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.935 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:38.935 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:38.935 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.935 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.935 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:38.935 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.935 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.935 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:39.195 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:39.195 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:39.195 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:39.195 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:39.195 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:39.195 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:39.195 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:39.195 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:39.195 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:39.195 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:39.195 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:39.195 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:39.195 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:39.195 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:39.195 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:39.195 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:39.195 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:39.195 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:39.456 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:39.456 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:39.456 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:39.456 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:39.456 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:39.456 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:39.456 07:13:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:39.456 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:39.456 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:39.456 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:39.456 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:39.456 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:39.456 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:39.456 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:39.456 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:39.456 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:39.456 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:39.456 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:39.456 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:39.456 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:39.456 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:39.456 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:39.456 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:39.456 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:39.456 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:39.456 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:39.456 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:39.717 07:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:39.717 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:39.717 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:39.717 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:39.717 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:39.717 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:39.717 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:39.717 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:39.717 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:39.717 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:39.717 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:39.717 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:39.717 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:39.717 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:39.717 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:39.717 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:39.717 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:39.717 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:39.717 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:39.978 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:39.978 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:39.978 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:39.978 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:39.978 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:39.978 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:39.978 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:39.978 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:39.978 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:39.978 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:39.978 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:39.978 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:39.978 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:39.978 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:39.978 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:39.978 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:39.978 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:39.978 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:40.239 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:40.239 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:40.239 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:40.239 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:40.239 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:40.239 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:40.239 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:40.239 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:40.239 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:40.239 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:40.239 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:40.239 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:40.239 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:40.239 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:40.239 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:40.239 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:40.239 07:13:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:40.239 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:40.239 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:40.239 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:40.239 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:40.239 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:40.239 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:40.239 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:40.239 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:40.240 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:40.240 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:40.240 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:40.500 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:40.500 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:40.500 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:40.500 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:40.500 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:40.500 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:40.500 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( ++i )) 00:30:40.500 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:40.500 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:40.500 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:40.501 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:40.501 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:40.501 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:40.501 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:40.501 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:40.501 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:40.501 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:40.501 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:40.501 07:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:40.761 07:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:40.761 07:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:40.761 07:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:40.761 07:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:40.761 07:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:40.761 07:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:30:40.761 07:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:40.761 07:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:40.761 07:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:40.761 07:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:40.761 07:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:40.761 07:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:40.761 07:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:40.761 07:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:40.761 07:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:40.761 07:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:41.022 07:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:41.022 07:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:41.022 07:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:41.022 07:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:41.022 07:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:30:41.022 07:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:30:41.022 07:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:41.022 07:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:30:41.022 07:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:41.022 07:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:30:41.022 07:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:41.022 07:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:41.022 rmmod nvme_tcp 00:30:41.022 rmmod nvme_fabrics 00:30:41.022 rmmod nvme_keyring 00:30:41.022 07:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:41.023 07:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:30:41.023 07:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 
0 00:30:41.023 07:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 3324082 ']' 00:30:41.023 07:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 3324082 00:30:41.023 07:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 3324082 ']' 00:30:41.023 07:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 3324082 00:30:41.023 07:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:30:41.023 07:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:41.023 07:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3324082 00:30:41.023 07:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:41.023 07:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:41.023 07:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3324082' 00:30:41.023 killing process with pid 3324082 00:30:41.023 07:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 3324082 00:30:41.023 07:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 3324082 00:30:41.023 07:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:41.023 07:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:41.023 07:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:41.023 07:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:30:41.023 07:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save 00:30:41.023 07:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:41.023 07:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore 00:30:41.283 07:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:41.283 07:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:41.283 07:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:41.283 07:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:41.283 07:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:43.198 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 
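[annotation] The loop traced above (before the nvmftestfini teardown) is ns_hotplug_stress.sh lines @16-@18: for ten iterations it races namespace attach against detach on cnode1 while I/O continues elsewhere. A minimal sketch of the pattern, assuming randomized namespace IDs -- the exact selection logic is not visible in the trace, and the rpc/nqn variables below are illustrative shorthands, not the verbatim script:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    for ((i = 0; i < 10; i++)); do
        n=$(((RANDOM % 8) + 1))                            # namespace ID 1..8
        # attach null bdev N-1 as namespace N (nsid 8 -> null7 etc., as traced)
        $rpc nvmf_subsystem_add_ns -n $n $nqn null$((n - 1)) || true
        # detach another namespace picked the same way; collisions are tolerated
        $rpc nvmf_subsystem_remove_ns $nqn $(((RANDOM % 8) + 1)) || true
    done
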
00:30:43.198 00:30:43.198 real 0m49.450s 00:30:43.198 user 2m59.612s 00:30:43.198 sys 0m21.970s 00:30:43.198 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:43.198 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:43.198 ************************************ 00:30:43.198 END TEST nvmf_ns_hotplug_stress 00:30:43.198 ************************************ 00:30:43.198 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:30:43.198 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:30:43.198 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:43.198 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:43.198 ************************************ 00:30:43.198 START TEST nvmf_delete_subsystem 00:30:43.198 ************************************ 00:30:43.198 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:30:43.460 * Looking for test storage... 00:30:43.460 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:43.460 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:43.460 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:30:43.460 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:43.460 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:43.460 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:43.460 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:43.460 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:43.460 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:30:43.460 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:30:43.460 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:30:43.460 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:30:43.460 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:30:43.460 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:30:43.460 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:30:43.460 07:13:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:43.460 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:30:43.460 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:30:43.460 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:43.460 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:43.460 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:30:43.460 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:30:43.460 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:43.460 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:30:43.460 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:30:43.460 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:30:43.460 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:30:43.460 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:43.460 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:30:43.460 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:30:43.460 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:43.460 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:43.460 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:30:43.460 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:43.460 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:43.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:43.460 --rc genhtml_branch_coverage=1 00:30:43.460 --rc genhtml_function_coverage=1 00:30:43.460 --rc genhtml_legend=1 00:30:43.460 --rc geninfo_all_blocks=1 00:30:43.460 --rc geninfo_unexecuted_blocks=1 00:30:43.460 00:30:43.460 ' 00:30:43.460 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:43.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:43.460 --rc genhtml_branch_coverage=1 00:30:43.460 --rc genhtml_function_coverage=1 00:30:43.460 --rc genhtml_legend=1 00:30:43.460 --rc geninfo_all_blocks=1 00:30:43.460 --rc geninfo_unexecuted_blocks=1 00:30:43.460 00:30:43.460 ' 00:30:43.460 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:30:43.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:43.460 --rc genhtml_branch_coverage=1 00:30:43.460 --rc genhtml_function_coverage=1 00:30:43.460 --rc genhtml_legend=1 00:30:43.460 --rc geninfo_all_blocks=1 00:30:43.460 --rc geninfo_unexecuted_blocks=1 00:30:43.460 00:30:43.460 ' 00:30:43.460 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:43.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:43.460 --rc genhtml_branch_coverage=1 00:30:43.460 --rc genhtml_function_coverage=1 00:30:43.460 --rc genhtml_legend=1 00:30:43.460 --rc geninfo_all_blocks=1 00:30:43.460 --rc geninfo_unexecuted_blocks=1 00:30:43.460 00:30:43.460 ' 00:30:43.460 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:43.460 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:30:43.460 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:43.460 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:43.460 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:43.460 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:43.460 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:43.460 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:43.460 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:43.460 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:43.460 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:43.460 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:43.460 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:43.460 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:43.460 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:43.461 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:43.461 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:43.461 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:43.461 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:43.461 
07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:30:43.461 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:43.461 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:43.461 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:43.461 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.461 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.461 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.461 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:30:43.461 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.461 07:13:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:30:43.461 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:43.461 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:43.461 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:43.461 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:43.461 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:43.461 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:43.461 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:43.461 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:43.461 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:43.461 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:43.461 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:30:43.461 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:43.461 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:43.461 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:43.461 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:43.461 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:43.461 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:43.461 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:43.461 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:43.461 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:43.461 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:43.461 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:30:43.461 07:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:51.597 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:51.597 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:30:51.597 07:13:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:51.597 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:51.597 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:51.597 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:51.597 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:51.597 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:30:51.597 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:51.597 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:30:51.597 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:30:51.597 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:30:51.597 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:51.598 07:13:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:51.598 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:51.598 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:51.598 07:13:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:51.598 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:51.598 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:51.598 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:51.598 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.659 ms 00:30:51.598 00:30:51.598 --- 10.0.0.2 ping statistics --- 00:30:51.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:51.598 rtt min/avg/max/mdev = 0.659/0.659/0.659/0.000 ms 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:51.598 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:51.598 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:30:51.598 00:30:51.598 --- 10.0.0.1 ping statistics --- 00:30:51.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:51.598 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:51.598 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:51.599 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:51.599 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:51.599 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:51.599 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:51.599 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:51.599 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:30:51.599 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:51.599 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:51.599 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:51.599 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=3335947 00:30:51.599 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 3335947 00:30:51.599 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:30:51.599 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 3335947 ']' 00:30:51.599 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:51.599 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:51.599 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:51.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
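[annotation] The nvmftestinit plumbing traced above reduces to a handful of ip/iptables commands: one E810 port (cvl_0_0) is moved into a private network namespace to act as the target, the other (cvl_0_1) stays in the root namespace as the initiator, and both directions are ping-verified before 4420/tcp is opened. Condensed from the nvmf/common.sh trace, commands as logged:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                               # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator
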
00:30:51.599 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:51.599 07:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:51.599 [2024-10-16 07:13:50.538030] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:51.599 [2024-10-16 07:13:50.539150] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:30:51.599 [2024-10-16 07:13:50.539199] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:51.599 [2024-10-16 07:13:50.631033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:51.599 [2024-10-16 07:13:50.683027] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:51.599 [2024-10-16 07:13:50.683083] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:51.599 [2024-10-16 07:13:50.683092] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:51.599 [2024-10-16 07:13:50.683099] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:51.599 [2024-10-16 07:13:50.683105] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:51.599 [2024-10-16 07:13:50.684893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:51.599 [2024-10-16 07:13:50.684960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:51.599 [2024-10-16 07:13:50.761535] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:51.599 [2024-10-16 07:13:50.762221] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:51.599 [2024-10-16 07:13:50.762489] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
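[annotation] nvmfappstart then runs the target inside that namespace; pid 3335947 above is this process, started in interrupt mode on cores 0-1 (-m 0x3). A condensed sketch of the launch as traced, where waitforlisten is the autotest_common.sh helper that polls the /var/tmp/spdk.sock RPC socket and $SPDK stands in for the full workspace checkout path:

    ip netns exec cvl_0_0_ns_spdk \
        $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
    nvmfpid=$!
    waitforlisten $nvmfpid    # blocks until the RPC socket answers
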
00:30:52.170 07:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:52.170 07:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:30:52.170 07:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:52.170 07:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:52.170 07:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:52.170 07:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:52.170 07:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:52.170 07:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.170 07:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:52.170 [2024-10-16 07:13:51.421935] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:52.170 07:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.171 07:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:52.171 07:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.171 07:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:52.171 07:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.171 07:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:52.171 07:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.171 07:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:52.171 [2024-10-16 07:13:51.454685] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:52.171 07:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.171 07:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:30:52.171 07:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.171 07:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:52.171 NULL1 00:30:52.171 07:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.171 07:13:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:52.171 07:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.171 07:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:52.171 Delay0 00:30:52.171 07:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.171 07:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:52.171 07:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.171 07:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:52.171 07:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.171 07:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3336202 00:30:52.171 07:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:30:52.171 07:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:30:52.171 [2024-10-16 07:13:51.567274] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
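[annotation] The rpc_cmd sequence traced above builds the delete_subsystem fixture: a TCP transport, subsystem cnode1 capped at 10 namespaces, a listener on 10.0.0.2:4420, and a 1000 MiB null bdev wrapped in a delay bdev (1,000,000 us, i.e. ~1 s, added to every operation per the RPC arguments) so I/O is guaranteed to be in flight when the subsystem disappears. A condensed sketch -- rpc_cmd is the autotest wrapper around scripts/rpc.py, and the final delete is the @32 call traced just below:

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512
    rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $SPDK/build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # yank it mid-I/O
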
00:30:54.084 07:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:30:54.084 07:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:54.084 07:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
[log condensed: several hundred repeated "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines between 00:30:54.345 and 00:30:55.289, as the perf I/O still queued against Delay0 fails while the subsystem is torn down; the distinct *ERROR* entries interleaved with them are kept below in their original order]
[2024-10-16 07:13:53.695580] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f1ed8000c00 is same with the state(6) to be set
[2024-10-16 07:13:53.696269] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f1ed800d450 is same with the state(6) to be set
[2024-10-16 07:13:54.666398] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f7a70 is same with the state(6) to be set
[2024-10-16 07:13:54.697609] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f1ed800d780 is same with the state(6) to be set
[2024-10-16 07:13:54.697671] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f1ed800cfe0 is same with the state(6) to be set
[2024-10-16 07:13:54.698117] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6930 is same with the state(6) to be set
[2024-10-16 07:13:54.699139] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f6570 is same with the state(6) to be set
00:30:55.289 Initializing NVMe Controllers
00:30:55.289 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:55.289 Controller IO queue size 128, less than required.
00:30:55.289 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:55.289 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:30:55.289 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:30:55.289 Initialization complete. Launching workers.
00:30:55.289 ========================================================
00:30:55.289 Latency(us)
00:30:55.289 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:30:55.289 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     190.45       0.09  894730.55     481.67 1011429.14
00:30:55.289 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     142.71       0.07  965542.92     691.72 1044476.64
00:30:55.289 ========================================================
00:30:55.289 Total                                                                    :     333.17       0.16  925063.61     481.67 1044476.64
00:30:55.289
00:30:55.289 [2024-10-16 07:13:54.699503] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f7a70 (9): Bad file descriptor
00:30:55.289 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:30:55.289 07:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:55.289 07:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:30:55.289 07:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3336202
00:30:55.289 07:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:30:55.861 07:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:30:55.861 07:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3336202
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3336202) - No such process
00:30:55.861 07:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3336202
00:30:55.861 07:13:55
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:30:55.861 07:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3336202 00:30:55.861 07:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:30:55.861 07:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:55.861 07:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:30:55.861 07:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:55.861 07:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 3336202 00:30:55.861 07:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:30:55.861 07:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:55.861 07:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:55.861 07:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:55.861 07:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:55.861 07:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.861 07:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:55.861 07:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.861 07:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:55.861 07:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.861 07:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:55.861 [2024-10-16 07:13:55.234296] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:55.861 07:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.861 07:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:55.861 07:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.861 07:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:55.861 07:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.861 07:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- target/delete_subsystem.sh@54 -- # perf_pid=3336869 00:30:55.862 07:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:30:55.862 07:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:30:55.862 07:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3336869 00:30:55.862 07:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:55.862 [2024-10-16 07:13:55.321096] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:30:56.433 07:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:56.433 07:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3336869 00:30:56.433 07:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:57.004 07:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:57.004 07:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3336869 00:30:57.004 07:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:57.574 07:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:57.574 07:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3336869 00:30:57.574 07:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:57.835 07:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:57.835 07:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3336869 00:30:57.835 07:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:58.406 07:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:58.406 07:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3336869 00:30:58.406 07:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:58.977 07:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:58.977 07:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3336869 00:30:58.977 07:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 
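The @57/@58/@60 triplet repeating above is a bounded wait for the second perf run (started with -t 3) to finish on its own; a hedged reconstruction of the loop behind those line numbers (shape inferred from the trace, not copied from the script):

    # Wait for spdk_nvme_perf to exit, giving up after ~10 s of polling
    delay=0
    while kill -0 "$perf_pid" 2> /dev/null; do   # kill -0: existence probe, sends no signal
        sleep 0.5
        if (( delay++ > 20 )); then              # cap the wait at ~20 polls of 0.5 s each
            echo 'perf did not exit in time' >&2
            return 1
        fi
    done

Once perf exits, kill -0 starts failing with "No such process" (visible just below) and the harness falls through to wait to collect the exit status.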
00:30:59.237 Initializing NVMe Controllers
00:30:59.237 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:59.237 Controller IO queue size 128, less than required.
00:30:59.237 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:59.237 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:30:59.237 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:30:59.237 Initialization complete. Launching workers.
00:30:59.237 ========================================================
00:30:59.237 Latency(us)
00:30:59.237 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:30:59.237 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     128.00       0.06 1002294.94 1000116.48 1041760.64
00:30:59.238 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     128.00       0.06 1003850.56 1000256.49 1009586.33
00:30:59.238 ========================================================
00:30:59.238 Total                                                                    :     256.00       0.12 1003072.75 1000116.48 1041760.64
00:30:59.238
00:30:59.498 07:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:30:59.498 07:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3336869
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3336869) - No such process
00:30:59.498 07:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3336869
00:30:59.498 07:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:30:59.498 07:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:30:59.498 07:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup
00:30:59.498 07:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:30:59.498 07:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:30:59.498 07:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:30:59.498 07:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:30:59.498 07:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:30:59.498 07:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:30:59.498 07:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:30:59.498 07:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:30:59.498 07:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 3335947 ']'
00:30:59.498 07:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 3335947
00:30:59.498 07:13:58
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 3335947 ']' 00:30:59.498 07:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 3335947 00:30:59.498 07:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:30:59.498 07:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:59.498 07:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3335947 00:30:59.498 07:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:59.498 07:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:59.498 07:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3335947' 00:30:59.498 killing process with pid 3335947 00:30:59.498 07:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 3335947 00:30:59.498 07:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 3335947 00:30:59.759 07:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:59.759 07:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:59.759 07:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:59.759 07:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:30:59.759 07:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:30:59.759 07:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:59.759 07:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:30:59.759 07:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:59.759 07:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:59.759 07:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:59.759 07:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:59.759 07:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:01.676 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:01.676 00:31:01.676 real 0m18.428s 00:31:01.676 user 0m26.622s 00:31:01.676 sys 0m7.573s 00:31:01.676 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:01.676 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:31:01.676 ************************************ 00:31:01.676 END TEST nvmf_delete_subsystem 00:31:01.676 ************************************ 00:31:01.676 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:31:01.676 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:31:01.676 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:01.676 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:01.938 ************************************ 00:31:01.938 START TEST nvmf_host_management 00:31:01.938 ************************************ 00:31:01.938 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:31:01.938 * Looking for test storage... 00:31:01.938 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:01.938 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:01.938 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:31:01.938 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:01.938 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:01.938 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:01.938 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:01.938 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:01.938 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:31:01.938 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:31:01.938 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:31:01.938 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:31:01.938 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:31:01.938 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:31:01.938 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:31:01.938 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:01.938 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:31:01.938 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:31:01.938 07:14:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:01.938 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:01.938 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:31:01.938 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:31:01.938 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:01.938 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:31:01.938 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:31:01.938 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:31:01.938 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:31:01.938 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:01.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:01.939 --rc genhtml_branch_coverage=1 00:31:01.939 --rc genhtml_function_coverage=1 00:31:01.939 --rc genhtml_legend=1 00:31:01.939 --rc geninfo_all_blocks=1 00:31:01.939 --rc geninfo_unexecuted_blocks=1 00:31:01.939 00:31:01.939 ' 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:01.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:01.939 --rc genhtml_branch_coverage=1 00:31:01.939 --rc genhtml_function_coverage=1 00:31:01.939 --rc genhtml_legend=1 00:31:01.939 --rc geninfo_all_blocks=1 00:31:01.939 --rc geninfo_unexecuted_blocks=1 00:31:01.939 00:31:01.939 ' 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:01.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:01.939 --rc genhtml_branch_coverage=1 00:31:01.939 --rc genhtml_function_coverage=1 00:31:01.939 --rc genhtml_legend=1 00:31:01.939 --rc geninfo_all_blocks=1 00:31:01.939 --rc geninfo_unexecuted_blocks=1 00:31:01.939 00:31:01.939 ' 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:01.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:01.939 --rc genhtml_branch_coverage=1 00:31:01.939 --rc genhtml_function_coverage=1 00:31:01.939 --rc genhtml_legend=1 00:31:01.939 --rc geninfo_all_blocks=1 00:31:01.939 --rc geninfo_unexecuted_blocks=1 00:31:01.939 00:31:01.939 ' 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
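Backing up to the scripts/common.sh trace a few entries above: the lt 1.15 2 call is a field-wise version comparison (fields split on '.', '-' and ':') that decides which lcov option spellings get exported. A simplified, hedged sketch of that comparison, reconstructed from the echoed steps (the real helper also validates each field as a decimal, which is elided here):

    # lt A B  ->  exit 0 when version A sorts before version B
    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            # a missing field compares as 0; first unequal field decides the order
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $op == '>' || $op == '>=' ]]; return; }
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $op == '<' || $op == '<=' ]]; return; }
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # all fields equal
    }

Here lt 1.15 2 sees 1 < 2 in the first field and succeeds, so the pre-2.0 --rc lcov_branch_coverage=1 spellings are the ones exported in the trace.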
00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:31:01.939 07:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # 
pci_net_devs=() 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:10.090 07:14:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:10.090 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:10.090 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:10.090 07:14:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:10.090 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:10.090 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
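
A condensed shell sketch of the device discovery traced above: common.sh maps each whitelisted E810 PCI function to its kernel netdev through sysfs, then hands the resulting names to nvmf_tcp_init as the target/initiator interfaces just assigned. The PCI addresses are the ones from this run; the surrounding autotest bookkeeping is omitted.

for pci in 0000:4b:00.0 0000:4b:00.1; do
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
        [[ -e $path ]] || continue           # skip functions with no bound netdev
        echo "Found net devices under $pci: ${path##*/}"
    done
done
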
00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:10.090 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:10.091 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:10.091 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:10.091 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:10.091 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:10.091 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:10.091 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:10.091 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:10.091 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:10.091 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:10.091 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:10.091 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:10.091 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:10.091 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:10.091 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:10.091 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.709 ms 00:31:10.091 00:31:10.091 --- 10.0.0.2 ping statistics --- 00:31:10.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:10.091 rtt min/avg/max/mdev = 0.709/0.709/0.709/0.000 ms 00:31:10.091 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:10.091 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:10.091 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:31:10.091 00:31:10.091 --- 10.0.0.1 ping statistics --- 00:31:10.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:10.091 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:31:10.091 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:10.091 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:31:10.091 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:10.091 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:10.091 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:10.091 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:10.091 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:10.091 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:10.091 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:10.091 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:31:10.091 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:31:10.091 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:31:10.091 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:10.091 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:10.091 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:10.091 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=3341796 00:31:10.091 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 3341796 00:31:10.091 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:31:10.091 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 3341796 ']' 00:31:10.091 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:10.091 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:10.091 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:10.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:10.091 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:10.091 07:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:10.091 [2024-10-16 07:14:08.944657] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:10.091 [2024-10-16 07:14:08.945808] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:31:10.091 [2024-10-16 07:14:08.945881] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:10.091 [2024-10-16 07:14:09.035647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:10.091 [2024-10-16 07:14:09.088396] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:10.091 [2024-10-16 07:14:09.088450] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:10.091 [2024-10-16 07:14:09.088459] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:10.091 [2024-10-16 07:14:09.088466] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:10.091 [2024-10-16 07:14:09.088472] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:10.091 [2024-10-16 07:14:09.090864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:10.091 [2024-10-16 07:14:09.091010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:10.091 [2024-10-16 07:14:09.091262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:10.091 [2024-10-16 07:14:09.091265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:10.091 [2024-10-16 07:14:09.168841] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:10.091 [2024-10-16 07:14:09.169842] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:10.091 [2024-10-16 07:14:09.170277] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:10.091 [2024-10-16 07:14:09.170421] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:10.091 [2024-10-16 07:14:09.170488] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
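
The nvmfappstart/waitforlisten pair above reduces to roughly the following: launch nvmf_tgt inside the freshly created namespace, then poll its RPC socket until initialization finishes. This is a sketch, not the helper's exact code (the real waitforlisten also bounds its retries); paths and flags are taken from this run.

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
nvmfpid=$!
# framework_wait_init returns only once the app is listening and fully initialized.
until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk.sock framework_wait_init >/dev/null 2>&1; do
    sleep 0.5
done
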
00:31:10.353 07:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:10.353 07:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:31:10.353 07:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:10.353 07:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:10.353 07:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:10.353 07:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:10.353 07:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:10.353 07:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.353 07:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:10.353 [2024-10-16 07:14:09.800310] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:10.353 07:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.353 07:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:31:10.353 07:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:10.353 07:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:10.353 07:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:10.353 07:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:31:10.616 07:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:31:10.616 07:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.616 07:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:10.616 Malloc0 00:31:10.616 [2024-10-16 07:14:09.912594] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:10.616 07:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.616 07:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:31:10.616 07:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:10.616 07:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:10.616 07:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3341926 00:31:10.616 07:14:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3341926 /var/tmp/bdevperf.sock 00:31:10.616 07:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 3341926 ']' 00:31:10.616 07:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:10.616 07:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:10.616 07:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:10.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:10.616 07:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:31:10.616 07:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:31:10.616 07:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:10.616 07:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:10.616 07:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:31:10.616 07:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:31:10.616 07:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:31:10.616 07:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:31:10.616 { 00:31:10.616 "params": { 00:31:10.616 "name": "Nvme$subsystem", 00:31:10.616 "trtype": "$TEST_TRANSPORT", 00:31:10.616 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:10.616 "adrfam": "ipv4", 00:31:10.616 "trsvcid": "$NVMF_PORT", 00:31:10.616 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:10.616 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:10.616 "hdgst": ${hdgst:-false}, 00:31:10.616 "ddgst": ${ddgst:-false} 00:31:10.616 }, 00:31:10.616 "method": "bdev_nvme_attach_controller" 00:31:10.616 } 00:31:10.616 EOF 00:31:10.616 )") 00:31:10.616 07:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:31:10.616 07:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 
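
The heredoc above is gen_nvmf_target_json assembling the bdevperf controller config; the resolved JSON is printed next in the trace and reaches bdevperf as an anonymous file (/dev/fd/63) via process substitution. Sketched as a single invocation with the queue depth, I/O size, workload and runtime traced above:

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 0) \
    -q 64 -o 65536 -w verify -t 10       # 64-deep 64 KiB verify workload for 10 s
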
00:31:10.616 07:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:31:10.616 07:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:31:10.616 "params": { 00:31:10.616 "name": "Nvme0", 00:31:10.616 "trtype": "tcp", 00:31:10.616 "traddr": "10.0.0.2", 00:31:10.616 "adrfam": "ipv4", 00:31:10.616 "trsvcid": "4420", 00:31:10.616 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:10.616 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:10.616 "hdgst": false, 00:31:10.616 "ddgst": false 00:31:10.616 }, 00:31:10.616 "method": "bdev_nvme_attach_controller" 00:31:10.616 }' 00:31:10.616 [2024-10-16 07:14:10.023923] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:31:10.616 [2024-10-16 07:14:10.024000] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3341926 ] 00:31:10.616 [2024-10-16 07:14:10.109022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:10.878 [2024-10-16 07:14:10.165537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:11.140 Running I/O for 10 seconds... 00:31:11.402 07:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:11.402 07:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:31:11.402 07:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:31:11.402 07:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.402 07:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:11.402 07:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.402 07:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:11.402 07:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:31:11.402 07:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:31:11.402 07:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:31:11.402 07:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:31:11.402 07:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:31:11.402 07:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:31:11.402 07:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:31:11.402 07:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:31:11.403 07:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:31:11.403 07:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.403 07:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:11.403 07:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.665 07:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=735 00:31:11.665 07:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 735 -ge 100 ']' 00:31:11.665 07:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:31:11.665 07:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:31:11.665 07:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:31:11.665 07:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:31:11.665 07:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.665 07:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:11.665 [2024-10-16 07:14:10.927973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c49360 is same with the state(6) to be set 00:31:11.665 [2024-10-16 07:14:10.928038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c49360 is same with the state(6) to be set 00:31:11.665 [2024-10-16 07:14:10.928048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c49360 is same with the state(6) to be set 00:31:11.665 [2024-10-16 07:14:10.928056] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c49360 is same with the state(6) to be set 00:31:11.665 07:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.665 07:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:31:11.665 07:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.665 07:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:11.665 [2024-10-16 07:14:10.935126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:11.665 [2024-10-16 07:14:10.935193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.665 [2024-10-16 07:14:10.935206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:11.665 [2024-10-16 
07:14:10.935215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.665 [2024-10-16 07:14:10.935225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:11.665 [2024-10-16 07:14:10.935233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.665 [2024-10-16 07:14:10.935242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:11.665 [2024-10-16 07:14:10.935250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.665 [2024-10-16 07:14:10.935259] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa830c0 is same with the state(6) to be set 00:31:11.665 [2024-10-16 07:14:10.935329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.665 [2024-10-16 07:14:10.935340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.665 [2024-10-16 07:14:10.935357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:106624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.665 [2024-10-16 07:14:10.935366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.665 [2024-10-16 07:14:10.935377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:106752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.665 [2024-10-16 07:14:10.935386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.665 [2024-10-16 07:14:10.935397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:106880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.665 [2024-10-16 07:14:10.935404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.665 [2024-10-16 07:14:10.935415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:107008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.665 [2024-10-16 07:14:10.935423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.665 [2024-10-16 07:14:10.935432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:107136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.665 [2024-10-16 07:14:10.935440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.665 [2024-10-16 07:14:10.935451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:107264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.665 [2024-10-16 07:14:10.935466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.665 [2024-10-16 07:14:10.935476] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:107392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.665 [2024-10-16 07:14:10.935484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.665 [2024-10-16 07:14:10.935495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:107520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.665 [2024-10-16 07:14:10.935503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.665 [2024-10-16 07:14:10.935512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:107648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.665 [2024-10-16 07:14:10.935521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.666 [2024-10-16 07:14:10.935531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:107776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.666 [2024-10-16 07:14:10.935539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.666 [2024-10-16 07:14:10.935550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:107904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.666 [2024-10-16 07:14:10.935558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.666 [2024-10-16 07:14:10.935568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:108032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.666 [2024-10-16 07:14:10.935577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.666 [2024-10-16 07:14:10.935587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:108160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.666 [2024-10-16 07:14:10.935596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.666 [2024-10-16 07:14:10.935607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:108288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.666 [2024-10-16 07:14:10.935615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.666 [2024-10-16 07:14:10.935625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:108416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.666 [2024-10-16 07:14:10.935632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.666 [2024-10-16 07:14:10.935642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:108544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.666 [2024-10-16 07:14:10.935651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.666 [2024-10-16 07:14:10.935661] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:108672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.666 [2024-10-16 07:14:10.935668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.666 [2024-10-16 07:14:10.935678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:108800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.666 [2024-10-16 07:14:10.935686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.666 [2024-10-16 07:14:10.935698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:108928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.666 [2024-10-16 07:14:10.935707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.666 [2024-10-16 07:14:10.935717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:109056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.666 [2024-10-16 07:14:10.935725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.666 [2024-10-16 07:14:10.935735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:109184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.666 [2024-10-16 07:14:10.935743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.666 [2024-10-16 07:14:10.935753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:109312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.666 [2024-10-16 07:14:10.935761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.666 [2024-10-16 07:14:10.935771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:109440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.666 [2024-10-16 07:14:10.935779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.666 [2024-10-16 07:14:10.935789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:109568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.666 [2024-10-16 07:14:10.935796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.666 [2024-10-16 07:14:10.935806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:109696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.666 [2024-10-16 07:14:10.935814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.666 [2024-10-16 07:14:10.935824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:109824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.666 [2024-10-16 07:14:10.935833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.666 [2024-10-16 07:14:10.935851] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:109952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.666 [2024-10-16 07:14:10.935859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.666 [2024-10-16 07:14:10.935868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:110080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.666 [2024-10-16 07:14:10.935876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.666 [2024-10-16 07:14:10.935886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:110208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.666 [2024-10-16 07:14:10.935894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.666 [2024-10-16 07:14:10.935904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:110336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.666 [2024-10-16 07:14:10.935913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.666 [2024-10-16 07:14:10.935922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:110464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.666 [2024-10-16 07:14:10.935933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.666 [2024-10-16 07:14:10.935943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:110592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.666 [2024-10-16 07:14:10.935951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.666 [2024-10-16 07:14:10.935964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:110720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.666 [2024-10-16 07:14:10.935972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.666 [2024-10-16 07:14:10.935982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:110848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.666 [2024-10-16 07:14:10.935991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.666 [2024-10-16 07:14:10.936002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:110976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.666 [2024-10-16 07:14:10.936010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.666 [2024-10-16 07:14:10.936020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:111104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.666 [2024-10-16 07:14:10.936028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.666 [2024-10-16 07:14:10.936039] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:111232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.666 [2024-10-16 07:14:10.936047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.666 [2024-10-16 07:14:10.936057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:111360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.666 [2024-10-16 07:14:10.936066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.666 [2024-10-16 07:14:10.936075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:111488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.666 [2024-10-16 07:14:10.936083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.666 [2024-10-16 07:14:10.936093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:111616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.666 [2024-10-16 07:14:10.936100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.666 [2024-10-16 07:14:10.936111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:111744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.666 [2024-10-16 07:14:10.936119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.666 [2024-10-16 07:14:10.936129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:111872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.666 [2024-10-16 07:14:10.936137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.666 [2024-10-16 07:14:10.936147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:112000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.666 [2024-10-16 07:14:10.936159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.666 [2024-10-16 07:14:10.936169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:112128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.666 [2024-10-16 07:14:10.936178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.666 [2024-10-16 07:14:10.936187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:112256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.666 [2024-10-16 07:14:10.936194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.666 [2024-10-16 07:14:10.936207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:112384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.666 [2024-10-16 07:14:10.936215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.666 [2024-10-16 07:14:10.936225] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:112512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.666 [2024-10-16 07:14:10.936234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.666 [2024-10-16 07:14:10.936244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:112640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.666 [2024-10-16 07:14:10.936253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.666 [2024-10-16 07:14:10.936264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:112768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.666 [2024-10-16 07:14:10.936272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.666 [2024-10-16 07:14:10.936282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:112896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.667 [2024-10-16 07:14:10.936290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.667 [2024-10-16 07:14:10.936300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:113024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.667 [2024-10-16 07:14:10.936309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.667 [2024-10-16 07:14:10.936319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:113152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.667 [2024-10-16 07:14:10.936328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.667 [2024-10-16 07:14:10.936338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:113280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.667 [2024-10-16 07:14:10.936346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.667 [2024-10-16 07:14:10.936356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:113408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.667 [2024-10-16 07:14:10.936364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.667 [2024-10-16 07:14:10.936375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:113536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.667 [2024-10-16 07:14:10.936383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.667 [2024-10-16 07:14:10.936395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:113664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.667 [2024-10-16 07:14:10.936404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.667 [2024-10-16 07:14:10.936413] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:113792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.667 [2024-10-16 07:14:10.936422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.667 [2024-10-16 07:14:10.936432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:113920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.667 [2024-10-16 07:14:10.936440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.667 [2024-10-16 07:14:10.936450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:114048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.667 [2024-10-16 07:14:10.936458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.667 [2024-10-16 07:14:10.936469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:114176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.667 [2024-10-16 07:14:10.936478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.667 [2024-10-16 07:14:10.936488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:114304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.667 [2024-10-16 07:14:10.936497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.667 [2024-10-16 07:14:10.936506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:114432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.667 [2024-10-16 07:14:10.936515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.667 [2024-10-16 07:14:10.936526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:114560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.667 [2024-10-16 07:14:10.936535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.667 [2024-10-16 07:14:10.936616] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc9c370 was disconnected and freed. reset controller. 
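
The burst of ABORTED - SQ DELETION completions above is the expected signature of this test step: host_management.sh@84/@85 revoked and immediately restored the initiator's access to the subsystem while bdevperf had 64 WRITEs in flight. The RPC pair, sketched with the NQNs from this run:

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc_py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# -> the target drops the host's qpairs; every queued WRITE completes as ABORTED - SQ DELETION
$rpc_py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# -> the initiator's controller reset can reconnect ("Resetting controller successful" below)
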
00:31:11.667 [2024-10-16 07:14:10.937826] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:11.667 task offset: 106496 on job bdev=Nvme0n1 fails 00:31:11.667 00:31:11.667 Latency(us) 00:31:11.667 [2024-10-16T05:14:11.166Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:11.667 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:11.667 Job: Nvme0n1 ended in about 0.53 seconds with error 00:31:11.667 Verification LBA range: start 0x0 length 0x400 00:31:11.667 Nvme0n1 : 0.53 1584.72 99.04 121.90 0.00 36495.59 2293.76 34515.63 00:31:11.667 [2024-10-16T05:14:11.166Z] =================================================================================================================== 00:31:11.667 [2024-10-16T05:14:11.166Z] Total : 1584.72 99.04 121.90 0.00 36495.59 2293.76 34515.63 00:31:11.667 [2024-10-16 07:14:10.940038] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:11.667 [2024-10-16 07:14:10.940075] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa830c0 (9): Bad file descriptor 00:31:11.667 07:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.667 07:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:31:11.667 [2024-10-16 07:14:11.033056] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:12.611 07:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3341926 00:31:12.611 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3341926) - No such process 00:31:12.611 07:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:31:12.611 07:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:31:12.611 07:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:31:12.611 07:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:31:12.611 07:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:31:12.611 07:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:31:12.611 07:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:31:12.611 07:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:31:12.611 { 00:31:12.611 "params": { 00:31:12.611 "name": "Nvme$subsystem", 00:31:12.611 "trtype": "$TEST_TRANSPORT", 00:31:12.611 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:12.611 "adrfam": "ipv4", 00:31:12.611 "trsvcid": "$NVMF_PORT", 00:31:12.611 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:12.611 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:12.611 "hdgst": ${hdgst:-false}, 00:31:12.611 "ddgst": 
${ddgst:-false} 00:31:12.611 }, 00:31:12.611 "method": "bdev_nvme_attach_controller" 00:31:12.611 } 00:31:12.611 EOF 00:31:12.611 )") 00:31:12.611 07:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:31:12.611 07:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:31:12.611 07:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:31:12.611 07:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:31:12.611 "params": { 00:31:12.611 "name": "Nvme0", 00:31:12.611 "trtype": "tcp", 00:31:12.611 "traddr": "10.0.0.2", 00:31:12.611 "adrfam": "ipv4", 00:31:12.611 "trsvcid": "4420", 00:31:12.611 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:12.611 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:12.611 "hdgst": false, 00:31:12.611 "ddgst": false 00:31:12.611 }, 00:31:12.611 "method": "bdev_nvme_attach_controller" 00:31:12.611 }' 00:31:12.611 [2024-10-16 07:14:12.009123] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:31:12.611 [2024-10-16 07:14:12.009204] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3342294 ] 00:31:12.611 [2024-10-16 07:14:12.089740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:12.872 [2024-10-16 07:14:12.129905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:12.872 Running I/O for 1 seconds... 00:31:14.258 1677.00 IOPS, 104.81 MiB/s 00:31:14.258 Latency(us) 00:31:14.258 [2024-10-16T05:14:13.757Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:14.258 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:14.258 Verification LBA range: start 0x0 length 0x400 00:31:14.258 Nvme0n1 : 1.01 1732.14 108.26 0.00 0.00 36254.14 2034.35 34734.08 00:31:14.258 [2024-10-16T05:14:13.757Z] =================================================================================================================== 00:31:14.258 [2024-10-16T05:14:13.757Z] Total : 1732.14 108.26 0.00 0.00 36254.14 2034.35 34734.08 00:31:14.258 07:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:31:14.258 07:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:31:14.258 07:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:31:14.258 07:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:14.258 07:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:31:14.258 07:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:14.258 07:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:31:14.258 07:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # 
'[' tcp == tcp ']' 00:31:14.258 07:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:31:14.258 07:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:14.258 07:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:14.258 rmmod nvme_tcp 00:31:14.258 rmmod nvme_fabrics 00:31:14.258 rmmod nvme_keyring 00:31:14.258 07:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:14.258 07:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:31:14.258 07:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:31:14.258 07:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 3341796 ']' 00:31:14.258 07:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 3341796 00:31:14.258 07:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 3341796 ']' 00:31:14.258 07:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 3341796 00:31:14.258 07:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:31:14.258 07:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:14.258 07:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3341796 00:31:14.258 07:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:14.258 07:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:14.258 07:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3341796' 00:31:14.258 killing process with pid 3341796 00:31:14.258 07:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 3341796 00:31:14.258 07:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 3341796 00:31:14.258 [2024-10-16 07:14:13.686459] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:31:14.258 07:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:14.258 07:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:14.258 07:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:14.258 07:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:31:14.258 07:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:31:14.258 07:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:14.258 07:14:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:31:14.258 07:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:14.258 07:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:14.258 07:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:14.258 07:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:14.258 07:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:16.816 07:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:16.816 07:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:31:16.816 00:31:16.816 real 0m14.603s 00:31:16.816 user 0m19.299s 00:31:16.816 sys 0m7.420s 00:31:16.816 07:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:16.816 07:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:16.816 ************************************ 00:31:16.816 END TEST nvmf_host_management 00:31:16.816 ************************************ 00:31:16.816 07:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:31:16.816 07:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:31:16.816 07:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:16.816 07:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:16.816 ************************************ 00:31:16.816 START TEST nvmf_lvol 00:31:16.816 ************************************ 00:31:16.816 07:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:31:16.816 * Looking for test storage... 
00:31:16.816 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:16.816 07:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:16.816 07:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:31:16.816 07:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:16.816 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:16.816 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:16.816 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:16.816 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:16.816 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:31:16.816 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:31:16.816 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:31:16.816 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:31:16.816 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:31:16.816 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:31:16.816 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:31:16.816 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:16.816 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:31:16.816 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:31:16.816 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:16.816 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:16.816 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:31:16.816 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:31:16.816 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:16.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.817 --rc genhtml_branch_coverage=1 00:31:16.817 --rc genhtml_function_coverage=1 00:31:16.817 --rc genhtml_legend=1 00:31:16.817 --rc geninfo_all_blocks=1 00:31:16.817 --rc geninfo_unexecuted_blocks=1 00:31:16.817 00:31:16.817 ' 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:16.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.817 --rc genhtml_branch_coverage=1 00:31:16.817 --rc genhtml_function_coverage=1 00:31:16.817 --rc genhtml_legend=1 00:31:16.817 --rc geninfo_all_blocks=1 00:31:16.817 --rc geninfo_unexecuted_blocks=1 00:31:16.817 00:31:16.817 ' 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:16.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.817 --rc genhtml_branch_coverage=1 00:31:16.817 --rc genhtml_function_coverage=1 00:31:16.817 --rc genhtml_legend=1 00:31:16.817 --rc geninfo_all_blocks=1 00:31:16.817 --rc geninfo_unexecuted_blocks=1 00:31:16.817 00:31:16.817 ' 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:16.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.817 --rc genhtml_branch_coverage=1 00:31:16.817 --rc genhtml_function_coverage=1 00:31:16.817 --rc genhtml_legend=1 00:31:16.817 --rc geninfo_all_blocks=1 00:31:16.817 --rc geninfo_unexecuted_blocks=1 00:31:16.817 00:31:16.817 ' 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:16.817 07:14:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:16.817 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:16.818 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:31:16.818 07:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:25.062 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:25.062 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:31:25.062 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:25.062 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:25.062 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:25.062 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:31:25.062 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:25.062 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:31:25.062 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:25.062 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:31:25.062 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:31:25.062 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:31:25.062 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:31:25.062 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:31:25.062 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:31:25.062 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:25.062 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:25.062 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:25.062 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:25.062 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:25.062 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:25.062 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:25.062 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:25.062 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:25.062 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:25.062 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:25.062 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:25.062 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:25.062 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:25.062 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:25.062 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:25.062 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:25.062 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:25.062 07:14:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:25.062 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:25.062 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:25.062 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:25.062 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:25.062 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:25.062 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:25.062 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:25.063 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:25.063 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for 
pci in "${pci_devs[@]}" 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:25.063 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:25.063 
07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:25.063 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:25.063 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.543 ms 00:31:25.063 00:31:25.063 --- 10.0.0.2 ping statistics --- 00:31:25.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:25.063 rtt min/avg/max/mdev = 0.543/0.543/0.543/0.000 ms 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:25.063 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:25.063 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.345 ms 00:31:25.063 00:31:25.063 --- 10.0.0.1 ping statistics --- 00:31:25.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:25.063 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=3346900 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 3346900 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 3346900 ']' 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:25.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:25.063 07:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:25.063 [2024-10-16 07:14:23.664374] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:31:25.063 [2024-10-16 07:14:23.665510] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:31:25.063 [2024-10-16 07:14:23.665565] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:25.063 [2024-10-16 07:14:23.753362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:25.063 [2024-10-16 07:14:23.806099] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:25.063 [2024-10-16 07:14:23.806152] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:25.063 [2024-10-16 07:14:23.806161] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:25.063 [2024-10-16 07:14:23.806168] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:25.063 [2024-10-16 07:14:23.806174] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:25.063 [2024-10-16 07:14:23.807945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:25.063 [2024-10-16 07:14:23.808115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:25.063 [2024-10-16 07:14:23.808115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:25.063 [2024-10-16 07:14:23.884321] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:25.063 [2024-10-16 07:14:23.885303] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:25.063 [2024-10-16 07:14:23.885496] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:25.063 [2024-10-16 07:14:23.885693] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:31:25.063 07:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:25.063 07:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:31:25.064 07:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:25.064 07:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:25.064 07:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:25.064 07:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:25.064 07:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:25.324 [2024-10-16 07:14:24.681211] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:25.324 07:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:25.585 07:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:31:25.586 07:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:25.846 07:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:31:25.846 07:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:31:25.846 07:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:31:26.106 07:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=2327a7b9-c6e8-4d62-9ab1-dac0dd502430 00:31:26.106 07:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2327a7b9-c6e8-4d62-9ab1-dac0dd502430 lvol 20 00:31:26.367 07:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=f3ea6fa5-db3b-4321-87ca-b64892613d18 00:31:26.367 07:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:26.627 07:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f3ea6fa5-db3b-4321-87ca-b64892613d18 00:31:26.627 07:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:26.888 [2024-10-16 07:14:26.241103] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:31:26.888 07:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:27.149 07:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3347324 00:31:27.149 07:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:31:27.150 07:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:31:28.093 07:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot f3ea6fa5-db3b-4321-87ca-b64892613d18 MY_SNAPSHOT 00:31:28.354 07:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=2d86b31a-01fe-4454-ba30-06983e1db538 00:31:28.354 07:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize f3ea6fa5-db3b-4321-87ca-b64892613d18 30 00:31:28.615 07:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 2d86b31a-01fe-4454-ba30-06983e1db538 MY_CLONE 00:31:28.876 07:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=a9b77799-49ad-4aa8-8abd-37da110b7f37 00:31:28.876 07:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate a9b77799-49ad-4aa8-8abd-37da110b7f37 00:31:29.448 07:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3347324 00:31:37.585 Initializing NVMe Controllers 00:31:37.585 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:31:37.585 Controller IO queue size 128, less than required. 00:31:37.585 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:37.585 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:31:37.585 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:31:37.585 Initialization complete. Launching workers. 
00:31:37.585 ========================================================
00:31:37.585 Latency(us)
00:31:37.585 Device Information : IOPS MiB/s Average min max
00:31:37.585 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15328.70 59.88 8353.00 1886.43 65126.64
00:31:37.585 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15219.90 59.45 8410.04 4446.82 73677.69
00:31:37.585 ========================================================
00:31:37.585 Total : 30548.60 119.33 8381.42 1886.43 73677.69
00:31:37.585
00:31:37.585 07:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:31:37.846 07:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f3ea6fa5-db3b-4321-87ca-b64892613d18
00:31:38.107 07:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2327a7b9-c6e8-4d62-9ab1-dac0dd502430
00:31:38.107 07:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:31:38.107 07:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:31:38.107 07:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:31:38.107 07:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup
00:31:38.107 07:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:31:38.107 07:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:38.107 07:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:31:38.107 07:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:38.107 07:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:31:38.107 rmmod nvme_tcp
00:31:38.107 rmmod nvme_fabrics
00:31:38.107 rmmod nvme_keyring
00:31:38.107 07:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:31:38.107 07:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:31:38.107 07:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:31:38.107 07:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 3346900 ']'
00:31:38.107 07:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 3346900
00:31:38.107 07:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 3346900 ']'
00:31:38.107 07:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 3346900
00:31:38.107 07:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # uname
00:31:38.107 07:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:31:38.107 07:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3346900 00:31:38.107 07:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:38.107 07:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:38.107 07:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3346900' 00:31:38.107 killing process with pid 3346900 00:31:38.107 07:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 3346900 00:31:38.107 07:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 3346900 00:31:38.367 07:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:38.367 07:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:38.367 07:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:38.367 07:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:31:38.367 07:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:31:38.367 07:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:38.367 07:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:31:38.367 07:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:38.367 07:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:38.367 07:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:38.367 07:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:38.367 07:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:40.279 07:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:40.279 00:31:40.279 real 0m23.830s 00:31:40.279 user 0m56.193s 00:31:40.279 sys 0m10.545s 00:31:40.279 07:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:40.279 07:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:40.279 ************************************ 00:31:40.279 END TEST nvmf_lvol 00:31:40.279 ************************************ 00:31:40.279 07:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:31:40.279 07:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:31:40.279 07:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:40.279 07:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:40.541 ************************************ 00:31:40.541 START TEST nvmf_lvs_grow 00:31:40.541 
************************************ 00:31:40.541 07:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:31:40.541 * Looking for test storage... 00:31:40.541 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:40.541 07:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:40.541 07:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:31:40.541 07:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:40.541 07:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:40.541 07:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:40.541 07:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:40.541 07:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:40.541 07:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:31:40.541 07:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:31:40.541 07:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:31:40.541 07:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:31:40.541 07:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:31:40.541 07:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:31:40.541 07:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:31:40.541 07:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:40.541 07:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:31:40.541 07:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:31:40.541 07:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:40.541 07:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:40.541 07:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:31:40.541 07:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:31:40.541 07:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:40.541 07:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:31:40.541 07:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:31:40.541 07:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:31:40.541 07:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:31:40.541 07:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:40.541 07:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:31:40.541 07:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:31:40.541 07:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:40.541 07:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:40.541 07:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:31:40.541 07:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:40.541 07:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:40.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.541 --rc genhtml_branch_coverage=1 00:31:40.541 --rc genhtml_function_coverage=1 00:31:40.541 --rc genhtml_legend=1 00:31:40.541 --rc geninfo_all_blocks=1 00:31:40.541 --rc geninfo_unexecuted_blocks=1 00:31:40.541 00:31:40.541 ' 00:31:40.541 07:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:40.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.541 --rc genhtml_branch_coverage=1 00:31:40.541 --rc genhtml_function_coverage=1 00:31:40.541 --rc genhtml_legend=1 00:31:40.541 --rc geninfo_all_blocks=1 00:31:40.541 --rc geninfo_unexecuted_blocks=1 00:31:40.541 00:31:40.541 ' 00:31:40.541 07:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:40.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.541 --rc genhtml_branch_coverage=1 00:31:40.541 --rc genhtml_function_coverage=1 00:31:40.541 --rc genhtml_legend=1 00:31:40.541 --rc geninfo_all_blocks=1 00:31:40.541 --rc geninfo_unexecuted_blocks=1 00:31:40.541 00:31:40.541 ' 00:31:40.541 07:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:40.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.541 --rc genhtml_branch_coverage=1 00:31:40.541 --rc genhtml_function_coverage=1 00:31:40.541 --rc genhtml_legend=1 00:31:40.541 --rc geninfo_all_blocks=1 00:31:40.541 --rc geninfo_unexecuted_blocks=1 00:31:40.541 00:31:40.541 ' 00:31:40.541 07:14:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:40.541 07:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:31:40.541 07:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:40.541 07:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:40.541 07:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:40.541 07:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:40.541 07:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:40.541 07:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:40.541 07:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:40.541 07:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:40.541 07:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:40.541 07:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:40.541 07:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:40.541 07:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:40.541 07:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:40.541 07:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:40.541 07:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:40.541 07:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:40.541 07:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:40.541 07:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:31:40.541 07:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:40.541 07:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:40.541 07:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:40.541 07:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.541 07:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.541 07:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.541 07:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:31:40.541 07:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.541 07:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:31:40.542 07:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:40.542 07:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:40.542 07:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:40.542 07:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:40.542 07:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:31:40.542 07:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:40.542 07:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:40.542 07:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:40.542 07:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:40.542 07:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:40.542 07:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:40.542 07:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:40.542 07:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:31:40.542 07:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:40.542 07:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:40.542 07:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:40.542 07:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:40.542 07:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:40.542 07:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:40.542 07:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:40.542 07:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:40.542 07:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:40.542 07:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:40.542 07:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:31:40.542 07:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:48.683 07:14:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
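[Annotation] The classification above buckets the host's NICs by PCI vendor:device ID — Intel (0x8086) E810 parts are 0x1592/0x159b, X722 is 0x37d2, and the 0x15b3 entries cover the Mellanox mlx5 family — and, because this run takes the e810 branch (the [[ e810 == e810 ]] checks), pci_devs is narrowed to the two E810 functions before the per-device loop that resumes below prints a "Found" line and the net device behind each one. A rough standalone sketch of the same discovery, using lspci directly instead of SPDK's pci_bus_cache helper (the device IDs come from the trace; the script itself is illustrative and not part of the suite):

    #!/usr/bin/env bash
    # Sketch: find PCI functions matching the Intel E810 IDs seen above
    # (0x159b, 0x1592) and print the kernel net device under each one.
    for pci in $(lspci -D -d 8086:159b | awk '{print $1}') \
               $(lspci -D -d 8086:1592 | awk '{print $1}'); do
        echo "Found $pci (Intel E810)"
        # Net interfaces are enumerated under the function's sysfs node:
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$dev" ] && echo "  net device: ${dev##*/}"
        done
    done

On this host the two matching functions are 0000:4b:00.0 and 0000:4b:00.1, surfacing as cvl_0_0 and cvl_0_1 in the trace that follows.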
00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:48.683 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:48.683 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:48.683 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for 
pci in "${pci_devs[@]}" 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:48.683 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:48.683 07:14:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:48.683 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:48.684 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:48.684 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:48.684 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:48.684 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:48.684 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:48.684 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:48.684 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.634 ms 00:31:48.684 00:31:48.684 --- 10.0.0.2 ping statistics --- 00:31:48.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:48.684 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:31:48.684 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:48.684 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:48.684 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:31:48.684 00:31:48.684 --- 10.0.0.1 ping statistics --- 00:31:48.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:48.684 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:31:48.684 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:48.684 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:31:48.684 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:48.684 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:48.684 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:48.684 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:48.684 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:48.684 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:48.684 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:48.684 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:31:48.684 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:48.684 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:48.684 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:48.684 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=3353651 00:31:48.684 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 3353651 00:31:48.684 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:31:48.684 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 3353651 ']' 00:31:48.684 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:48.684 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:48.684 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:48.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:48.684 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:48.684 07:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:48.684 [2024-10-16 07:14:47.511396] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
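[Annotation] Condensed, the nvmf_tcp_init sequence traced above gives the target NIC its own network namespace so the target and the initiator can exchange real packets on a single host. Every command below appears in the trace; they are only regrouped here, with the Jenkins workspace prefix shortened and the iptables bookkeeping comment dropped:

    # Target side: cvl_0_0 moves into a private namespace as 10.0.0.2/24.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Initiator side: cvl_0_1 stays in the root namespace as 10.0.0.1/24.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip link set cvl_0_1 up
    # Open the NVMe/TCP port and verify reachability in both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # The target then runs inside the namespace: core 0 only, interrupt mode.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1

The two pings are the gate: only after both directions answer does the script start nvmf_tgt and wait for its /var/tmp/spdk.sock RPC socket, which is the waitforlisten step above.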
00:31:48.684 [2024-10-16 07:14:47.512508] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:31:48.684 [2024-10-16 07:14:47.512559] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:48.684 [2024-10-16 07:14:47.598607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:48.684 [2024-10-16 07:14:47.649236] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:48.684 [2024-10-16 07:14:47.649282] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:48.684 [2024-10-16 07:14:47.649290] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:48.684 [2024-10-16 07:14:47.649298] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:48.684 [2024-10-16 07:14:47.649304] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:48.684 [2024-10-16 07:14:47.650059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:48.684 [2024-10-16 07:14:47.725894] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:48.684 [2024-10-16 07:14:47.726193] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:48.945 07:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:48.945 07:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:31:48.945 07:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:48.945 07:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:48.945 07:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:48.945 07:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:48.945 07:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:49.206 [2024-10-16 07:14:48.522962] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:49.206 07:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:31:49.206 07:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:49.206 07:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:49.206 07:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:49.206 ************************************ 00:31:49.206 START TEST lvs_grow_clean 00:31:49.206 ************************************ 00:31:49.206 07:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # 
lvs_grow 00:31:49.206 07:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:31:49.206 07:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:31:49.206 07:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:31:49.206 07:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:31:49.206 07:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:31:49.206 07:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:31:49.206 07:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:49.206 07:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:49.206 07:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:49.467 07:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:31:49.467 07:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:31:49.728 07:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=aeb729fe-9c52-4568-8306-8e501dc075fa 00:31:49.728 07:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aeb729fe-9c52-4568-8306-8e501dc075fa 00:31:49.728 07:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:31:49.728 07:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:31:49.728 07:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:31:49.728 07:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u aeb729fe-9c52-4568-8306-8e501dc075fa lvol 150 00:31:49.988 07:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=62555f93-7ea3-4809-99b0-00ab70f43a73 00:31:49.988 07:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:49.988 07:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:31:50.249 [2024-10-16 07:14:49.530590] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:31:50.249 [2024-10-16 07:14:49.530749] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:31:50.249 true 00:31:50.249 07:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:31:50.249 07:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aeb729fe-9c52-4568-8306-8e501dc075fa 00:31:50.249 07:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:31:50.249 07:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:50.510 07:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 62555f93-7ea3-4809-99b0-00ab70f43a73 00:31:50.771 07:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:50.771 [2024-10-16 07:14:50.255321] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:51.033 07:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:51.033 07:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3354163 00:31:51.033 07:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:51.033 07:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:31:51.033 07:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3354163 /var/tmp/bdevperf.sock 00:31:51.033 07:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 3354163 ']' 00:31:51.033 07:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:31:51.033 07:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:51.033 07:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:51.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:51.033 07:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:51.033 07:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:31:51.033 [2024-10-16 07:14:50.531335] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:31:51.033 [2024-10-16 07:14:50.531410] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3354163 ] 00:31:51.294 [2024-10-16 07:14:50.612428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:51.294 [2024-10-16 07:14:50.665131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:51.866 07:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:51.866 07:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:31:51.866 07:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:31:52.127 Nvme0n1 00:31:52.127 07:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:31:52.388 [ 00:31:52.388 { 00:31:52.388 "name": "Nvme0n1", 00:31:52.388 "aliases": [ 00:31:52.388 "62555f93-7ea3-4809-99b0-00ab70f43a73" 00:31:52.388 ], 00:31:52.388 "product_name": "NVMe disk", 00:31:52.388 "block_size": 4096, 00:31:52.388 "num_blocks": 38912, 00:31:52.388 "uuid": "62555f93-7ea3-4809-99b0-00ab70f43a73", 00:31:52.388 "numa_id": 0, 00:31:52.388 "assigned_rate_limits": { 00:31:52.388 "rw_ios_per_sec": 0, 00:31:52.388 "rw_mbytes_per_sec": 0, 00:31:52.388 "r_mbytes_per_sec": 0, 00:31:52.388 "w_mbytes_per_sec": 0 00:31:52.388 }, 00:31:52.388 "claimed": false, 00:31:52.388 "zoned": false, 00:31:52.388 "supported_io_types": { 00:31:52.388 "read": true, 00:31:52.388 "write": true, 00:31:52.388 "unmap": true, 00:31:52.388 "flush": true, 00:31:52.388 "reset": true, 00:31:52.388 "nvme_admin": true, 00:31:52.388 "nvme_io": true, 00:31:52.388 "nvme_io_md": false, 00:31:52.388 "write_zeroes": true, 00:31:52.388 "zcopy": false, 00:31:52.388 "get_zone_info": false, 00:31:52.388 "zone_management": false, 00:31:52.388 "zone_append": false, 00:31:52.388 "compare": true, 00:31:52.388 "compare_and_write": true, 00:31:52.388 "abort": true, 00:31:52.388 "seek_hole": false, 00:31:52.388 "seek_data": false, 00:31:52.388 "copy": true, 
00:31:52.388 "nvme_iov_md": false 00:31:52.388 }, 00:31:52.388 "memory_domains": [ 00:31:52.388 { 00:31:52.388 "dma_device_id": "system", 00:31:52.388 "dma_device_type": 1 00:31:52.388 } 00:31:52.388 ], 00:31:52.388 "driver_specific": { 00:31:52.388 "nvme": [ 00:31:52.388 { 00:31:52.388 "trid": { 00:31:52.388 "trtype": "TCP", 00:31:52.388 "adrfam": "IPv4", 00:31:52.388 "traddr": "10.0.0.2", 00:31:52.388 "trsvcid": "4420", 00:31:52.388 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:52.388 }, 00:31:52.388 "ctrlr_data": { 00:31:52.388 "cntlid": 1, 00:31:52.388 "vendor_id": "0x8086", 00:31:52.388 "model_number": "SPDK bdev Controller", 00:31:52.388 "serial_number": "SPDK0", 00:31:52.388 "firmware_revision": "25.01", 00:31:52.388 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:52.388 "oacs": { 00:31:52.388 "security": 0, 00:31:52.388 "format": 0, 00:31:52.388 "firmware": 0, 00:31:52.388 "ns_manage": 0 00:31:52.388 }, 00:31:52.388 "multi_ctrlr": true, 00:31:52.388 "ana_reporting": false 00:31:52.388 }, 00:31:52.388 "vs": { 00:31:52.388 "nvme_version": "1.3" 00:31:52.388 }, 00:31:52.388 "ns_data": { 00:31:52.388 "id": 1, 00:31:52.388 "can_share": true 00:31:52.388 } 00:31:52.388 } 00:31:52.388 ], 00:31:52.388 "mp_policy": "active_passive" 00:31:52.388 } 00:31:52.388 } 00:31:52.388 ] 00:31:52.388 07:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3354377 00:31:52.388 07:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:31:52.388 07:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:52.649 Running I/O for 10 seconds... 
00:31:53.592 Latency(us) 00:31:53.592 [2024-10-16T05:14:53.091Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:53.592 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:53.592 Nvme0n1 : 1.00 16325.00 63.77 0.00 0.00 0.00 0.00 0.00 00:31:53.592 [2024-10-16T05:14:53.091Z] =================================================================================================================== 00:31:53.592 [2024-10-16T05:14:53.091Z] Total : 16325.00 63.77 0.00 0.00 0.00 0.00 0.00 00:31:53.592 00:31:54.535 07:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u aeb729fe-9c52-4568-8306-8e501dc075fa 00:31:54.535 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:54.535 Nvme0n1 : 2.00 16550.50 64.65 0.00 0.00 0.00 0.00 0.00 00:31:54.535 [2024-10-16T05:14:54.034Z] =================================================================================================================== 00:31:54.535 [2024-10-16T05:14:54.034Z] Total : 16550.50 64.65 0.00 0.00 0.00 0.00 0.00 00:31:54.535 00:31:54.535 true 00:31:54.536 07:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aeb729fe-9c52-4568-8306-8e501dc075fa 00:31:54.536 07:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:31:54.796 07:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:31:54.796 07:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:31:54.796 07:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3354377 00:31:55.737 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:55.737 Nvme0n1 : 3.00 16759.00 65.46 0.00 0.00 0.00 0.00 0.00 00:31:55.737 [2024-10-16T05:14:55.236Z] =================================================================================================================== 00:31:55.737 [2024-10-16T05:14:55.236Z] Total : 16759.00 65.46 0.00 0.00 0.00 0.00 0.00 00:31:55.737 00:31:56.678 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:56.678 Nvme0n1 : 4.00 16931.25 66.14 0.00 0.00 0.00 0.00 0.00 00:31:56.678 [2024-10-16T05:14:56.177Z] =================================================================================================================== 00:31:56.678 [2024-10-16T05:14:56.177Z] Total : 16931.25 66.14 0.00 0.00 0.00 0.00 0.00 00:31:56.678 00:31:57.620 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:57.620 Nvme0n1 : 5.00 17615.40 68.81 0.00 0.00 0.00 0.00 0.00 00:31:57.620 [2024-10-16T05:14:57.119Z] =================================================================================================================== 00:31:57.620 [2024-10-16T05:14:57.119Z] Total : 17615.40 68.81 0.00 0.00 0.00 0.00 0.00 00:31:57.620 00:31:58.562 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:58.562 Nvme0n1 : 6.00 18756.83 73.27 0.00 0.00 0.00 0.00 0.00 00:31:58.562 [2024-10-16T05:14:58.061Z] 
=================================================================================================================== 00:31:58.562 [2024-10-16T05:14:58.061Z] Total : 18756.83 73.27 0.00 0.00 0.00 0.00 0.00 00:31:58.562 00:31:59.515 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:59.515 Nvme0n1 : 7.00 19575.57 76.47 0.00 0.00 0.00 0.00 0.00 00:31:59.515 [2024-10-16T05:14:59.014Z] =================================================================================================================== 00:31:59.515 [2024-10-16T05:14:59.014Z] Total : 19575.57 76.47 0.00 0.00 0.00 0.00 0.00 00:31:59.515 00:32:00.457 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:00.457 Nvme0n1 : 8.00 20194.62 78.89 0.00 0.00 0.00 0.00 0.00 00:32:00.457 [2024-10-16T05:14:59.956Z] =================================================================================================================== 00:32:00.457 [2024-10-16T05:14:59.956Z] Total : 20194.62 78.89 0.00 0.00 0.00 0.00 0.00 00:32:00.457 00:32:01.842 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:01.842 Nvme0n1 : 9.00 20673.44 80.76 0.00 0.00 0.00 0.00 0.00 00:32:01.842 [2024-10-16T05:15:01.341Z] =================================================================================================================== 00:32:01.842 [2024-10-16T05:15:01.341Z] Total : 20673.44 80.76 0.00 0.00 0.00 0.00 0.00 00:32:01.842 00:32:02.782 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:02.782 Nvme0n1 : 10.00 21057.30 82.26 0.00 0.00 0.00 0.00 0.00 00:32:02.782 [2024-10-16T05:15:02.281Z] =================================================================================================================== 00:32:02.782 [2024-10-16T05:15:02.281Z] Total : 21057.30 82.26 0.00 0.00 0.00 0.00 0.00 00:32:02.782 00:32:02.782 00:32:02.782 Latency(us) 00:32:02.782 [2024-10-16T05:15:02.281Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:02.782 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:02.782 Nvme0n1 : 10.00 21058.89 82.26 0.00 0.00 6074.11 3986.77 22500.69 00:32:02.782 [2024-10-16T05:15:02.281Z] =================================================================================================================== 00:32:02.782 [2024-10-16T05:15:02.281Z] Total : 21058.89 82.26 0.00 0.00 6074.11 3986.77 22500.69 00:32:02.782 { 00:32:02.782 "results": [ 00:32:02.782 { 00:32:02.782 "job": "Nvme0n1", 00:32:02.782 "core_mask": "0x2", 00:32:02.782 "workload": "randwrite", 00:32:02.782 "status": "finished", 00:32:02.782 "queue_depth": 128, 00:32:02.782 "io_size": 4096, 00:32:02.782 "runtime": 10.004945, 00:32:02.782 "iops": 21058.88638068475, 00:32:02.782 "mibps": 82.26127492454981, 00:32:02.782 "io_failed": 0, 00:32:02.782 "io_timeout": 0, 00:32:02.782 "avg_latency_us": 6074.111921832556, 00:32:02.782 "min_latency_us": 3986.7733333333335, 00:32:02.782 "max_latency_us": 22500.693333333333 00:32:02.782 } 00:32:02.782 ], 00:32:02.782 "core_count": 1 00:32:02.782 } 00:32:02.782 07:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3354163 00:32:02.782 07:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 3354163 ']' 00:32:02.782 07:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 3354163 
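[Annotation] A quick sanity check of the closing summary, since all three headline numbers must agree for a fixed 4096-byte I/O size:

    21058.89 IOPS x 4096 B        = 86,257,213 B/s
    86,257,213 B/s / 1,048,576    = 82.26 MiB/s     (matches the reported mibps)
    21058.89 IOPS x 10.004945 s   = ~210,693 I/Os completed in the run

Little's law gives the same picture from the latency side: 128 outstanding I/Os / 6074.11 us average latency = ~21,073 IOPS, within a tenth of a percent of the measured 21,058.89, so the queue stayed essentially full for the whole run.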
00:32:02.782 07:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:32:02.782 07:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:02.782 07:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3354163 00:32:02.782 07:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:02.782 07:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:02.782 07:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3354163' 00:32:02.782 killing process with pid 3354163 00:32:02.782 07:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 3354163 00:32:02.782 Received shutdown signal, test time was about 10.000000 seconds 00:32:02.782 00:32:02.782 Latency(us) 00:32:02.782 [2024-10-16T05:15:02.281Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:02.782 [2024-10-16T05:15:02.281Z] =================================================================================================================== 00:32:02.782 [2024-10-16T05:15:02.281Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:02.782 07:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 3354163 00:32:02.782 07:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:03.043 07:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:03.043 07:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aeb729fe-9c52-4568-8306-8e501dc075fa 00:32:03.043 07:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:32:03.304 07:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:32:03.304 07:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:32:03.304 07:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:03.304 [2024-10-16 07:15:02.782639] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:03.564 07:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aeb729fe-9c52-4568-8306-8e501dc075fa 
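[Annotation] The NOT wrapper here is the negative assertion of the clean-path teardown: once bdev_aio_delete has removed the base device, bdev_lvol_get_lvstores must fail with the -19 "No such device" JSON-RPC error shown below, and NOT inverts that failure into test success (the trace that follows first resolves rpc.py via valid_exec_arg, then runs it and checks es=1). A hypothetical minimal form of the inversion — the real helper lives in autotest_common.sh and does more validation than this:

    # Hypothetical sketch only; not the actual autotest_common.sh implementation.
    NOT() {
        if "$@"; then
            return 1   # the command succeeded, but the test required failure
        else
            return 0   # the command failed, which is what NOT asserts
        fi
    }
    # Usage, as in the trace (expects the -19 "No such device" error):
    NOT scripts/rpc.py bdev_lvol_get_lvstores -u aeb729fe-9c52-4568-8306-8e501dc075fa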
00:32:03.564 07:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:32:03.564 07:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aeb729fe-9c52-4568-8306-8e501dc075fa 00:32:03.564 07:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:03.564 07:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:03.564 07:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:03.564 07:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:03.564 07:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:03.564 07:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:03.564 07:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:03.564 07:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:32:03.564 07:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aeb729fe-9c52-4568-8306-8e501dc075fa 00:32:03.564 request: 00:32:03.564 { 00:32:03.564 "uuid": "aeb729fe-9c52-4568-8306-8e501dc075fa", 00:32:03.564 "method": "bdev_lvol_get_lvstores", 00:32:03.564 "req_id": 1 00:32:03.564 } 00:32:03.564 Got JSON-RPC error response 00:32:03.564 response: 00:32:03.564 { 00:32:03.564 "code": -19, 00:32:03.564 "message": "No such device" 00:32:03.564 } 00:32:03.564 07:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:32:03.564 07:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:03.564 07:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:03.564 07:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:03.564 07:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:03.824 aio_bdev 00:32:03.824 07:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
62555f93-7ea3-4809-99b0-00ab70f43a73 00:32:03.824 07:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=62555f93-7ea3-4809-99b0-00ab70f43a73 00:32:03.824 07:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:03.824 07:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:32:03.824 07:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:03.824 07:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:03.824 07:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:04.085 07:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 62555f93-7ea3-4809-99b0-00ab70f43a73 -t 2000 00:32:04.085 [ 00:32:04.085 { 00:32:04.085 "name": "62555f93-7ea3-4809-99b0-00ab70f43a73", 00:32:04.085 "aliases": [ 00:32:04.085 "lvs/lvol" 00:32:04.085 ], 00:32:04.085 "product_name": "Logical Volume", 00:32:04.085 "block_size": 4096, 00:32:04.085 "num_blocks": 38912, 00:32:04.085 "uuid": "62555f93-7ea3-4809-99b0-00ab70f43a73", 00:32:04.085 "assigned_rate_limits": { 00:32:04.085 "rw_ios_per_sec": 0, 00:32:04.085 "rw_mbytes_per_sec": 0, 00:32:04.085 "r_mbytes_per_sec": 0, 00:32:04.085 "w_mbytes_per_sec": 0 00:32:04.085 }, 00:32:04.085 "claimed": false, 00:32:04.085 "zoned": false, 00:32:04.085 "supported_io_types": { 00:32:04.085 "read": true, 00:32:04.085 "write": true, 00:32:04.085 "unmap": true, 00:32:04.085 "flush": false, 00:32:04.085 "reset": true, 00:32:04.085 "nvme_admin": false, 00:32:04.085 "nvme_io": false, 00:32:04.085 "nvme_io_md": false, 00:32:04.085 "write_zeroes": true, 00:32:04.085 "zcopy": false, 00:32:04.085 "get_zone_info": false, 00:32:04.085 "zone_management": false, 00:32:04.085 "zone_append": false, 00:32:04.085 "compare": false, 00:32:04.085 "compare_and_write": false, 00:32:04.085 "abort": false, 00:32:04.085 "seek_hole": true, 00:32:04.085 "seek_data": true, 00:32:04.085 "copy": false, 00:32:04.085 "nvme_iov_md": false 00:32:04.085 }, 00:32:04.085 "driver_specific": { 00:32:04.085 "lvol": { 00:32:04.085 "lvol_store_uuid": "aeb729fe-9c52-4568-8306-8e501dc075fa", 00:32:04.085 "base_bdev": "aio_bdev", 00:32:04.085 "thin_provision": false, 00:32:04.085 "num_allocated_clusters": 38, 00:32:04.085 "snapshot": false, 00:32:04.085 "clone": false, 00:32:04.085 "esnap_clone": false 00:32:04.085 } 00:32:04.085 } 00:32:04.085 } 00:32:04.085 ] 00:32:04.085 07:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:32:04.085 07:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aeb729fe-9c52-4568-8306-8e501dc075fa 00:32:04.085 07:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:04.346 07:15:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:04.346 07:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aeb729fe-9c52-4568-8306-8e501dc075fa 00:32:04.346 07:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:04.606 07:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:04.606 07:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 62555f93-7ea3-4809-99b0-00ab70f43a73 00:32:04.606 07:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u aeb729fe-9c52-4568-8306-8e501dc075fa 00:32:04.866 07:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:05.127 07:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:05.127 00:32:05.127 real 0m15.865s 00:32:05.127 user 0m15.448s 00:32:05.127 sys 0m1.528s 00:32:05.127 07:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:05.127 07:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:05.127 ************************************ 00:32:05.127 END TEST lvs_grow_clean 00:32:05.127 ************************************ 00:32:05.127 07:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:32:05.127 07:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:05.127 07:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:05.127 07:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:05.127 ************************************ 00:32:05.127 START TEST lvs_grow_dirty 00:32:05.127 ************************************ 00:32:05.127 07:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:32:05.127 07:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:05.127 07:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:32:05.127 07:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:05.127 07:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:05.127 07:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:05.127 07:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:05.127 07:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:05.127 07:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:05.127 07:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:05.388 07:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:05.388 07:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:05.654 07:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=796eb69b-318a-4c85-b48b-19e5c32380c6 00:32:05.654 07:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 796eb69b-318a-4c85-b48b-19e5c32380c6 00:32:05.654 07:15:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:32:05.654 07:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:32:05.654 07:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:32:05.654 07:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 796eb69b-318a-4c85-b48b-19e5c32380c6 lvol 150 00:32:05.913 07:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=c09ae66d-7702-40ba-ac50-d43f09d05cda 00:32:05.913 07:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:05.913 07:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:32:06.173 [2024-10-16 07:15:05.454577] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:32:06.173 [2024-10-16 07:15:05.454727] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:32:06.173 true 00:32:06.173 07:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 796eb69b-318a-4c85-b48b-19e5c32380c6 00:32:06.173 07:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:06.173 07:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:06.173 07:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:06.435 07:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c09ae66d-7702-40ba-ac50-d43f09d05cda 00:32:06.695 07:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:06.695 [2024-10-16 07:15:06.171229] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:06.695 07:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:06.955 07:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3357218 00:32:06.955 07:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:06.955 07:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:06.955 07:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3357218 /var/tmp/bdevperf.sock 00:32:06.955 07:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 3357218 ']' 00:32:06.955 07:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:06.955 07:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:06.955 07:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:06.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
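
The trace above covers the export-and-drive step: the lvol is attached as a namespace of nqn.2016-06.io.spdk:cnode0, a TCP listener is opened on 10.0.0.2:4420, and bdevperf is launched as a second SPDK process that idles (-z) until driven over its own RPC socket. A minimal sketch of that launch-and-attach pattern, using the flags and paths seen in this log and a simplified stand-in for waitforlisten (the real helper also bounds its retries):

    # Start bdevperf on core 1 (-m 0x2); -z makes it wait for a
    # perform_tests RPC instead of starting I/O immediately.
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    bdevperf_pid=$!

    # Poll the private RPC socket until the app answers
    # (condensed version of autotest_common.sh's waitforlisten).
    until ./scripts/rpc.py -t 1 -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done

    # Attach the exported namespace inside bdevperf as Nvme0n1.
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

    # Kick off the queued workload; this produces the per-second
    # latency table that follows in the log.
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
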
00:32:06.955 07:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:06.955 07:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:06.955 [2024-10-16 07:15:06.405298] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:32:06.955 [2024-10-16 07:15:06.405356] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3357218 ] 00:32:07.215 [2024-10-16 07:15:06.480641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:07.215 [2024-10-16 07:15:06.510610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:07.785 07:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:07.785 07:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:32:07.785 07:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:08.356 Nvme0n1 00:32:08.356 07:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:08.356 [ 00:32:08.356 { 00:32:08.356 "name": "Nvme0n1", 00:32:08.356 "aliases": [ 00:32:08.356 "c09ae66d-7702-40ba-ac50-d43f09d05cda" 00:32:08.356 ], 00:32:08.356 "product_name": "NVMe disk", 00:32:08.356 "block_size": 4096, 00:32:08.356 "num_blocks": 38912, 00:32:08.356 "uuid": "c09ae66d-7702-40ba-ac50-d43f09d05cda", 00:32:08.356 "numa_id": 0, 00:32:08.356 "assigned_rate_limits": { 00:32:08.356 "rw_ios_per_sec": 0, 00:32:08.356 "rw_mbytes_per_sec": 0, 00:32:08.356 "r_mbytes_per_sec": 0, 00:32:08.356 "w_mbytes_per_sec": 0 00:32:08.356 }, 00:32:08.356 "claimed": false, 00:32:08.356 "zoned": false, 00:32:08.356 "supported_io_types": { 00:32:08.356 "read": true, 00:32:08.356 "write": true, 00:32:08.356 "unmap": true, 00:32:08.356 "flush": true, 00:32:08.356 "reset": true, 00:32:08.356 "nvme_admin": true, 00:32:08.356 "nvme_io": true, 00:32:08.356 "nvme_io_md": false, 00:32:08.356 "write_zeroes": true, 00:32:08.356 "zcopy": false, 00:32:08.356 "get_zone_info": false, 00:32:08.356 "zone_management": false, 00:32:08.356 "zone_append": false, 00:32:08.356 "compare": true, 00:32:08.356 "compare_and_write": true, 00:32:08.356 "abort": true, 00:32:08.356 "seek_hole": false, 00:32:08.356 "seek_data": false, 00:32:08.356 "copy": true, 00:32:08.356 "nvme_iov_md": false 00:32:08.356 }, 00:32:08.356 "memory_domains": [ 00:32:08.356 { 00:32:08.356 "dma_device_id": "system", 00:32:08.356 "dma_device_type": 1 00:32:08.356 } 00:32:08.356 ], 00:32:08.356 "driver_specific": { 00:32:08.356 "nvme": [ 00:32:08.356 { 00:32:08.356 "trid": { 00:32:08.356 "trtype": "TCP", 00:32:08.356 "adrfam": "IPv4", 00:32:08.356 "traddr": "10.0.0.2", 00:32:08.356 "trsvcid": "4420", 00:32:08.356 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:08.356 }, 00:32:08.356 "ctrlr_data": 
{ 00:32:08.356 "cntlid": 1, 00:32:08.356 "vendor_id": "0x8086", 00:32:08.356 "model_number": "SPDK bdev Controller", 00:32:08.356 "serial_number": "SPDK0", 00:32:08.356 "firmware_revision": "25.01", 00:32:08.356 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:08.356 "oacs": { 00:32:08.356 "security": 0, 00:32:08.356 "format": 0, 00:32:08.356 "firmware": 0, 00:32:08.356 "ns_manage": 0 00:32:08.356 }, 00:32:08.356 "multi_ctrlr": true, 00:32:08.356 "ana_reporting": false 00:32:08.356 }, 00:32:08.356 "vs": { 00:32:08.356 "nvme_version": "1.3" 00:32:08.356 }, 00:32:08.356 "ns_data": { 00:32:08.356 "id": 1, 00:32:08.356 "can_share": true 00:32:08.356 } 00:32:08.356 } 00:32:08.356 ], 00:32:08.356 "mp_policy": "active_passive" 00:32:08.356 } 00:32:08.356 } 00:32:08.356 ] 00:32:08.357 07:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3357557 00:32:08.357 07:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:08.357 07:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:08.357 Running I/O for 10 seconds... 00:32:09.739 Latency(us) 00:32:09.739 [2024-10-16T05:15:09.238Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:09.739 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:09.739 Nvme0n1 : 1.00 24368.00 95.19 0.00 0.00 0.00 0.00 0.00 00:32:09.739 [2024-10-16T05:15:09.238Z] =================================================================================================================== 00:32:09.739 [2024-10-16T05:15:09.238Z] Total : 24368.00 95.19 0.00 0.00 0.00 0.00 0.00 00:32:09.739 00:32:10.311 07:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 796eb69b-318a-4c85-b48b-19e5c32380c6 00:32:10.572 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:10.572 Nvme0n1 : 2.00 24792.00 96.84 0.00 0.00 0.00 0.00 0.00 00:32:10.572 [2024-10-16T05:15:10.071Z] =================================================================================================================== 00:32:10.572 [2024-10-16T05:15:10.071Z] Total : 24792.00 96.84 0.00 0.00 0.00 0.00 0.00 00:32:10.572 00:32:10.572 true 00:32:10.572 07:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 796eb69b-318a-4c85-b48b-19e5c32380c6 00:32:10.572 07:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:32:10.833 07:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:32:10.833 07:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:32:10.833 07:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3357557 00:32:11.404 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:11.404 Nvme0n1 : 
3.00 24912.00 97.31 0.00 0.00 0.00 0.00 0.00 00:32:11.404 [2024-10-16T05:15:10.903Z] =================================================================================================================== 00:32:11.404 [2024-10-16T05:15:10.903Z] Total : 24912.00 97.31 0.00 0.00 0.00 0.00 0.00 00:32:11.404 00:32:12.378 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:12.378 Nvme0n1 : 4.00 25018.25 97.73 0.00 0.00 0.00 0.00 0.00 00:32:12.378 [2024-10-16T05:15:11.877Z] =================================================================================================================== 00:32:12.378 [2024-10-16T05:15:11.877Z] Total : 25018.25 97.73 0.00 0.00 0.00 0.00 0.00 00:32:12.378 00:32:13.797 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:13.797 Nvme0n1 : 5.00 25096.40 98.03 0.00 0.00 0.00 0.00 0.00 00:32:13.797 [2024-10-16T05:15:13.296Z] =================================================================================================================== 00:32:13.797 [2024-10-16T05:15:13.296Z] Total : 25096.40 98.03 0.00 0.00 0.00 0.00 0.00 00:32:13.797 00:32:14.369 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:14.369 Nvme0n1 : 6.00 25148.00 98.23 0.00 0.00 0.00 0.00 0.00 00:32:14.369 [2024-10-16T05:15:13.868Z] =================================================================================================================== 00:32:14.369 [2024-10-16T05:15:13.868Z] Total : 25148.00 98.23 0.00 0.00 0.00 0.00 0.00 00:32:14.369 00:32:15.753 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:15.753 Nvme0n1 : 7.00 25180.86 98.36 0.00 0.00 0.00 0.00 0.00 00:32:15.753 [2024-10-16T05:15:15.252Z] =================================================================================================================== 00:32:15.753 [2024-10-16T05:15:15.252Z] Total : 25180.86 98.36 0.00 0.00 0.00 0.00 0.00 00:32:15.753 00:32:16.696 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:16.696 Nvme0n1 : 8.00 25213.00 98.49 0.00 0.00 0.00 0.00 0.00 00:32:16.696 [2024-10-16T05:15:16.195Z] =================================================================================================================== 00:32:16.696 [2024-10-16T05:15:16.195Z] Total : 25213.00 98.49 0.00 0.00 0.00 0.00 0.00 00:32:16.696 00:32:17.636 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:17.636 Nvme0n1 : 9.00 25234.89 98.57 0.00 0.00 0.00 0.00 0.00 00:32:17.636 [2024-10-16T05:15:17.135Z] =================================================================================================================== 00:32:17.636 [2024-10-16T05:15:17.135Z] Total : 25234.89 98.57 0.00 0.00 0.00 0.00 0.00 00:32:17.636 00:32:18.577 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:18.577 Nvme0n1 : 10.00 25252.00 98.64 0.00 0.00 0.00 0.00 0.00 00:32:18.577 [2024-10-16T05:15:18.076Z] =================================================================================================================== 00:32:18.577 [2024-10-16T05:15:18.076Z] Total : 25252.00 98.64 0.00 0.00 0.00 0.00 0.00 00:32:18.577 00:32:18.577 00:32:18.577 Latency(us) 00:32:18.577 [2024-10-16T05:15:18.076Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:18.577 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:18.577 Nvme0n1 : 10.00 25253.88 98.65 0.00 0.00 5065.59 3181.23 30365.01 00:32:18.577 
[2024-10-16T05:15:18.076Z] =================================================================================================================== 00:32:18.577 [2024-10-16T05:15:18.076Z] Total : 25253.88 98.65 0.00 0.00 5065.59 3181.23 30365.01 00:32:18.577 { 00:32:18.577 "results": [ 00:32:18.577 { 00:32:18.577 "job": "Nvme0n1", 00:32:18.577 "core_mask": "0x2", 00:32:18.577 "workload": "randwrite", 00:32:18.577 "status": "finished", 00:32:18.577 "queue_depth": 128, 00:32:18.577 "io_size": 4096, 00:32:18.577 "runtime": 10.004326, 00:32:18.577 "iops": 25253.8751735999, 00:32:18.577 "mibps": 98.64794989687461, 00:32:18.577 "io_failed": 0, 00:32:18.577 "io_timeout": 0, 00:32:18.577 "avg_latency_us": 5065.593319189808, 00:32:18.577 "min_latency_us": 3181.2266666666665, 00:32:18.577 "max_latency_us": 30365.013333333332 00:32:18.577 } 00:32:18.577 ], 00:32:18.577 "core_count": 1 00:32:18.577 } 00:32:18.577 07:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3357218 00:32:18.577 07:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 3357218 ']' 00:32:18.577 07:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 3357218 00:32:18.577 07:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:32:18.577 07:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:18.577 07:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3357218 00:32:18.577 07:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:18.577 07:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:18.577 07:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3357218' 00:32:18.577 killing process with pid 3357218 00:32:18.577 07:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 3357218 00:32:18.577 Received shutdown signal, test time was about 10.000000 seconds 00:32:18.577 00:32:18.577 Latency(us) 00:32:18.577 [2024-10-16T05:15:18.076Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:18.577 [2024-10-16T05:15:18.076Z] =================================================================================================================== 00:32:18.577 [2024-10-16T05:15:18.076Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:18.577 07:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 3357218 00:32:18.577 07:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:18.838 07:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:32:19.099 07:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 796eb69b-318a-4c85-b48b-19e5c32380c6 00:32:19.099 07:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:32:19.099 07:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:32:19.099 07:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:32:19.099 07:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3353651 00:32:19.099 07:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3353651 00:32:19.359 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3353651 Killed "${NVMF_APP[@]}" "$@" 00:32:19.359 07:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:32:19.359 07:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:32:19.359 07:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:19.359 07:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:19.359 07:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:19.359 07:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=3360036 00:32:19.359 07:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 3360036 00:32:19.359 07:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:32:19.359 07:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 3360036 ']' 00:32:19.359 07:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:19.359 07:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:19.359 07:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:19.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
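
For the dirty variant the target is not shut down cleanly: the harness SIGKILLs the nvmf app that owns the lvstore (pid 3353651 above) so its blobstore metadata is left unsynced, then brings up a fresh nvmf_tgt, this time in interrupt mode inside the cvl_0_0_ns_spdk network namespace. Condensed from the trace, with the pid held in a variable for readability:

    # Crash the target before it can sync lvstore metadata to the AIO bdev.
    kill -9 "$nvmfpid"          # pid 3353651 in this run
    wait "$nvmfpid" || true     # reaps the job; returns 137 (128 + SIGKILL)

    # Restart the target in interrupt mode on core 0 inside the test netns.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
    nvmfpid=$!

The bs_recover / "Recover: blob" notices further down are the direct consequence: when the aio_bdev is re-created, the lvstore superblock is found dirty and the blobstore replays its metadata instead of doing a clean load.
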
00:32:19.359 07:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:19.359 07:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:19.359 [2024-10-16 07:15:18.678320] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:19.359 [2024-10-16 07:15:18.679320] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:32:19.359 [2024-10-16 07:15:18.679366] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:19.359 [2024-10-16 07:15:18.762639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:19.359 [2024-10-16 07:15:18.794463] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:19.359 [2024-10-16 07:15:18.794495] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:19.359 [2024-10-16 07:15:18.794501] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:19.359 [2024-10-16 07:15:18.794506] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:19.359 [2024-10-16 07:15:18.794510] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:19.359 [2024-10-16 07:15:18.794980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:19.359 [2024-10-16 07:15:18.846079] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:19.359 [2024-10-16 07:15:18.846286] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
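
Unlike the first target instance, this one runs with --interrupt-mode, so the notices above show each spdk_thread (app_thread, nvmf_tgt_poll_group_000) being switched to interrupt mode: idle reactors block on a file-descriptor set instead of busy-polling, which is what this nvmf_target_core_interrupt_mode suite exists to exercise. One way to confirm the mode from outside the process, assuming this SPDK revision's framework_get_reactors RPC reports an in_interrupt flag per lcore (a hypothetical check, not part of the test script itself):

    # Expect "in_interrupt": true for lcore 0 under --interrupt-mode.
    ./scripts/rpc.py framework_get_reactors \
        | jq '.reactors[] | {lcore, in_interrupt}'
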
00:32:20.303 07:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:20.303 07:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:32:20.303 07:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:20.303 07:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:20.303 07:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:20.303 07:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:20.303 07:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:20.303 [2024-10-16 07:15:19.685063] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:32:20.303 [2024-10-16 07:15:19.685289] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:32:20.303 [2024-10-16 07:15:19.685378] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:32:20.303 07:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:32:20.303 07:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev c09ae66d-7702-40ba-ac50-d43f09d05cda 00:32:20.303 07:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=c09ae66d-7702-40ba-ac50-d43f09d05cda 00:32:20.303 07:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:20.303 07:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:32:20.303 07:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:20.303 07:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:20.303 07:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:20.563 07:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c09ae66d-7702-40ba-ac50-d43f09d05cda -t 2000 00:32:20.563 [ 00:32:20.563 { 00:32:20.563 "name": "c09ae66d-7702-40ba-ac50-d43f09d05cda", 00:32:20.563 "aliases": [ 00:32:20.563 "lvs/lvol" 00:32:20.563 ], 00:32:20.563 "product_name": "Logical Volume", 00:32:20.563 "block_size": 4096, 00:32:20.563 "num_blocks": 38912, 00:32:20.563 "uuid": "c09ae66d-7702-40ba-ac50-d43f09d05cda", 00:32:20.563 "assigned_rate_limits": { 00:32:20.563 "rw_ios_per_sec": 0, 00:32:20.563 "rw_mbytes_per_sec": 0, 00:32:20.563 
"r_mbytes_per_sec": 0, 00:32:20.563 "w_mbytes_per_sec": 0 00:32:20.563 }, 00:32:20.563 "claimed": false, 00:32:20.563 "zoned": false, 00:32:20.563 "supported_io_types": { 00:32:20.563 "read": true, 00:32:20.563 "write": true, 00:32:20.563 "unmap": true, 00:32:20.563 "flush": false, 00:32:20.563 "reset": true, 00:32:20.563 "nvme_admin": false, 00:32:20.563 "nvme_io": false, 00:32:20.563 "nvme_io_md": false, 00:32:20.563 "write_zeroes": true, 00:32:20.563 "zcopy": false, 00:32:20.563 "get_zone_info": false, 00:32:20.563 "zone_management": false, 00:32:20.563 "zone_append": false, 00:32:20.563 "compare": false, 00:32:20.563 "compare_and_write": false, 00:32:20.563 "abort": false, 00:32:20.563 "seek_hole": true, 00:32:20.563 "seek_data": true, 00:32:20.563 "copy": false, 00:32:20.563 "nvme_iov_md": false 00:32:20.563 }, 00:32:20.563 "driver_specific": { 00:32:20.563 "lvol": { 00:32:20.563 "lvol_store_uuid": "796eb69b-318a-4c85-b48b-19e5c32380c6", 00:32:20.563 "base_bdev": "aio_bdev", 00:32:20.563 "thin_provision": false, 00:32:20.563 "num_allocated_clusters": 38, 00:32:20.564 "snapshot": false, 00:32:20.564 "clone": false, 00:32:20.564 "esnap_clone": false 00:32:20.564 } 00:32:20.564 } 00:32:20.564 } 00:32:20.564 ] 00:32:20.564 07:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:32:20.564 07:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 796eb69b-318a-4c85-b48b-19e5c32380c6 00:32:20.564 07:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:32:20.824 07:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:32:20.824 07:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 796eb69b-318a-4c85-b48b-19e5c32380c6 00:32:20.824 07:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:32:21.083 07:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:32:21.084 07:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:21.084 [2024-10-16 07:15:20.555469] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:21.344 07:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 796eb69b-318a-4c85-b48b-19e5c32380c6 00:32:21.344 07:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:32:21.344 07:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 796eb69b-318a-4c85-b48b-19e5c32380c6 00:32:21.344 07:15:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:21.344 07:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:21.344 07:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:21.344 07:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:21.344 07:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:21.344 07:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:21.344 07:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:21.344 07:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:32:21.344 07:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 796eb69b-318a-4c85-b48b-19e5c32380c6 00:32:21.344 request: 00:32:21.344 { 00:32:21.344 "uuid": "796eb69b-318a-4c85-b48b-19e5c32380c6", 00:32:21.344 "method": "bdev_lvol_get_lvstores", 00:32:21.344 "req_id": 1 00:32:21.344 } 00:32:21.344 Got JSON-RPC error response 00:32:21.344 response: 00:32:21.344 { 00:32:21.344 "code": -19, 00:32:21.344 "message": "No such device" 00:32:21.344 } 00:32:21.344 07:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:32:21.344 07:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:21.344 07:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:21.344 07:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:21.344 07:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:21.605 aio_bdev 00:32:21.605 07:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c09ae66d-7702-40ba-ac50-d43f09d05cda 00:32:21.605 07:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=c09ae66d-7702-40ba-ac50-d43f09d05cda 00:32:21.605 07:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:21.605 07:15:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:32:21.605 07:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:21.605 07:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:21.605 07:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:21.866 07:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c09ae66d-7702-40ba-ac50-d43f09d05cda -t 2000 00:32:21.866 [ 00:32:21.866 { 00:32:21.866 "name": "c09ae66d-7702-40ba-ac50-d43f09d05cda", 00:32:21.866 "aliases": [ 00:32:21.866 "lvs/lvol" 00:32:21.866 ], 00:32:21.866 "product_name": "Logical Volume", 00:32:21.866 "block_size": 4096, 00:32:21.866 "num_blocks": 38912, 00:32:21.866 "uuid": "c09ae66d-7702-40ba-ac50-d43f09d05cda", 00:32:21.866 "assigned_rate_limits": { 00:32:21.866 "rw_ios_per_sec": 0, 00:32:21.866 "rw_mbytes_per_sec": 0, 00:32:21.866 "r_mbytes_per_sec": 0, 00:32:21.866 "w_mbytes_per_sec": 0 00:32:21.866 }, 00:32:21.866 "claimed": false, 00:32:21.866 "zoned": false, 00:32:21.866 "supported_io_types": { 00:32:21.866 "read": true, 00:32:21.866 "write": true, 00:32:21.866 "unmap": true, 00:32:21.866 "flush": false, 00:32:21.866 "reset": true, 00:32:21.866 "nvme_admin": false, 00:32:21.866 "nvme_io": false, 00:32:21.866 "nvme_io_md": false, 00:32:21.866 "write_zeroes": true, 00:32:21.866 "zcopy": false, 00:32:21.866 "get_zone_info": false, 00:32:21.866 "zone_management": false, 00:32:21.866 "zone_append": false, 00:32:21.866 "compare": false, 00:32:21.866 "compare_and_write": false, 00:32:21.866 "abort": false, 00:32:21.866 "seek_hole": true, 00:32:21.866 "seek_data": true, 00:32:21.866 "copy": false, 00:32:21.866 "nvme_iov_md": false 00:32:21.866 }, 00:32:21.866 "driver_specific": { 00:32:21.866 "lvol": { 00:32:21.866 "lvol_store_uuid": "796eb69b-318a-4c85-b48b-19e5c32380c6", 00:32:21.866 "base_bdev": "aio_bdev", 00:32:21.866 "thin_provision": false, 00:32:21.866 "num_allocated_clusters": 38, 00:32:21.866 "snapshot": false, 00:32:21.866 "clone": false, 00:32:21.866 "esnap_clone": false 00:32:21.866 } 00:32:21.866 } 00:32:21.866 } 00:32:21.866 ] 00:32:21.866 07:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:32:21.866 07:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 796eb69b-318a-4c85-b48b-19e5c32380c6 00:32:21.866 07:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:22.126 07:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:22.126 07:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 796eb69b-318a-4c85-b48b-19e5c32380c6 00:32:22.126 07:15:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:22.387 07:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:22.387 07:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c09ae66d-7702-40ba-ac50-d43f09d05cda 00:32:22.387 07:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 796eb69b-318a-4c85-b48b-19e5c32380c6 00:32:22.648 07:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:22.908 07:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:22.908 00:32:22.908 real 0m17.702s 00:32:22.908 user 0m35.460s 00:32:22.908 sys 0m3.247s 00:32:22.908 07:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:22.908 07:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:22.908 ************************************ 00:32:22.908 END TEST lvs_grow_dirty 00:32:22.908 ************************************ 00:32:22.908 07:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:32:22.908 07:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:32:22.908 07:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:32:22.908 07:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:32:22.908 07:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:32:22.908 07:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:32:22.908 07:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:32:22.908 07:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:32:22.908 07:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:32:22.908 nvmf_trace.0 00:32:22.908 07:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:32:22.908 07:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:32:22.908 07:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:22.908 07:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
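
Teardown has started here: process_shm archives the target's trace shared-memory file, and nvmftestfini begins unwinding the kernel initiator stack (the rmmod lines that follow). The cleanup pattern, condensed from the nvmf/common.sh calls visible in this trace, with $output_dir standing in for the Jenkins output path:

    # Archive the trace shm for offline analysis (replayable with spdk_trace).
    tar -C /dev/shm/ -czf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0

    # Unload the initiator modules; retried under set +e because they can
    # stay referenced briefly after the last disconnect.
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break
        sleep 1
    done
    modprobe -v -r nvme-fabrics
    set -e
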
00:32:22.908 07:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:22.908 07:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:32:22.908 07:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:22.908 07:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:22.908 rmmod nvme_tcp 00:32:22.908 rmmod nvme_fabrics 00:32:22.908 rmmod nvme_keyring 00:32:22.908 07:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:22.908 07:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:32:22.908 07:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:32:22.908 07:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 3360036 ']' 00:32:22.908 07:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 3360036 00:32:22.908 07:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 3360036 ']' 00:32:22.908 07:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 3360036 00:32:22.908 07:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:32:22.908 07:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:22.908 07:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3360036 00:32:23.169 07:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:23.169 07:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:23.169 07:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3360036' 00:32:23.169 killing process with pid 3360036 00:32:23.169 07:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 3360036 00:32:23.169 07:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 3360036 00:32:23.169 07:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:23.169 07:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:23.169 07:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:23.169 07:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:32:23.169 07:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:32:23.169 07:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:23.169 07:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:32:23.169 07:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:23.169 07:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:23.169 07:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:23.169 07:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:23.169 07:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:25.713 00:32:25.713 real 0m44.890s 00:32:25.713 user 0m53.916s 00:32:25.713 sys 0m10.815s 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:25.713 ************************************ 00:32:25.713 END TEST nvmf_lvs_grow 00:32:25.713 ************************************ 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:25.713 ************************************ 00:32:25.713 START TEST nvmf_bdev_io_wait 00:32:25.713 ************************************ 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:25.713 * Looking for test storage... 
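
With nvmf_lvs_grow finished (note the END TEST banner and the cumulative 0m44.890s real time above), the harness immediately chains into the next suite through the same run_test wrapper. A simplified sketch of the shape of that wrapper; the real helper in autotest_common.sh also records timing data and manages xtrace state:

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        "$@"                    # run the test script itself
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }

    run_test nvmf_bdev_io_wait \
        test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode
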
00:32:25.713 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:25.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.713 --rc genhtml_branch_coverage=1 00:32:25.713 --rc genhtml_function_coverage=1 00:32:25.713 --rc genhtml_legend=1 00:32:25.713 --rc geninfo_all_blocks=1 00:32:25.713 --rc geninfo_unexecuted_blocks=1 00:32:25.713 00:32:25.713 ' 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:25.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.713 --rc genhtml_branch_coverage=1 00:32:25.713 --rc genhtml_function_coverage=1 00:32:25.713 --rc genhtml_legend=1 00:32:25.713 --rc geninfo_all_blocks=1 00:32:25.713 --rc geninfo_unexecuted_blocks=1 00:32:25.713 00:32:25.713 ' 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:25.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.713 --rc genhtml_branch_coverage=1 00:32:25.713 --rc genhtml_function_coverage=1 00:32:25.713 --rc genhtml_legend=1 00:32:25.713 --rc geninfo_all_blocks=1 00:32:25.713 --rc geninfo_unexecuted_blocks=1 00:32:25.713 00:32:25.713 ' 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:25.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.713 --rc genhtml_branch_coverage=1 00:32:25.713 --rc genhtml_function_coverage=1 00:32:25.713 --rc genhtml_legend=1 00:32:25.713 --rc geninfo_all_blocks=1 00:32:25.713 --rc 
geninfo_unexecuted_blocks=1 00:32:25.713 00:32:25.713 ' 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.713 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.714 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.714 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:32:25.714 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.714 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:32:25.714 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:25.714 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:25.714 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:25.714 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:25.714 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:32:25.714 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:25.714 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:25.714 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:25.714 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:25.714 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:25.714 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:25.714 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:25.714 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:32:25.714 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:25.714 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:25.714 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:25.714 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:25.714 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:25.714 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:25.714 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:25.714 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:25.714 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:25.714 07:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:25.714 07:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:32:25.714 07:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:33.855 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:33.855 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:32:33.855 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:33.855 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:33.855 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:33.855 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:33.855 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:32:33.855 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:32:33.855 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:33.855 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:32:33.855 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:32:33.855 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:32:33.855 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:32:33.855 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:32:33.855 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:32:33.855 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:33.855 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:33.855 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:33.855 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:33.855 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:33.855 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:33.855 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:33.855 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:33.855 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:33.855 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:33.855 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:33.855 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:33.855 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:33.855 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:33.855 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:33.855 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:33.855 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:33.855 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
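
The arrays above are lookup tables of supported NVMe-oF NICs keyed by PCI vendor:device ID (Intel 0x8086 E810 ports are 0x1592/0x159b and X722 is 0x37d2; the 0x15b3 entries cover the Mellanox ConnectX family), and the e810 list is selected because this job runs with SPDK_TEST_NVMF_NICS=e810. A minimal standalone sketch of the same classification, assuming lspci from pciutils instead of the harness's cached PCI bus:

  #!/usr/bin/env bash
  # Sketch only: tag Ethernet ports by PCI vendor:device ID, as the traced
  # gather_supported_nvmf_pci_devs does. Assumes pciutils (lspci) is installed.
  declare -a e810 x722 mlx
  while read -r addr rest; do
    # "rest" ends with the bracketed IDs, e.g. "... E810-XXV for SFP [8086:159b]"
    ids=$(grep -o '\[[0-9a-f]\{4\}:[0-9a-f]\{4\}\]' <<< "$rest" | tail -1 | tr -d '[]')
    case "$ids" in
      8086:1592 | 8086:159b) e810+=("$addr") ;;  # Intel E810, the NICs this job targets
      8086:37d2)             x722+=("$addr") ;;  # Intel X722
      15b3:*)                mlx+=("$addr") ;;   # Mellanox ConnectX family
    esac
  done < <(lspci -Dnn | grep -i 'ethernet controller')
  echo "E810 ports: ${e810[*]:-none}"
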
00:32:33.855 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:33.855 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:33.855 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:33.855 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:33.855 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:33.855 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:33.855 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:33.855 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:33.855 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:33.855 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:33.855 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:33.855 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:33.855 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:33.855 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:33.855 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:33.855 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:33.855 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:33.855 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:33.855 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:33.855 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:33.855 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:33.855 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:33.855 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:33.855 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:33.855 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:33.855 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:33.855 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:33.855 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:33.856 
07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:33.856 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:33.856 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:33.856 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:33.856 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:33.856 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:33.856 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:33.856 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:33.856 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:33.856 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:33.856 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:33.856 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:33.856 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:32:33.856 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:33.856 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:33.856 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:33.856 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:33.856 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:33.856 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:33.856 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:33.856 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:33.856 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:33.856 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:33.856 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:33.856 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:33.856 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:33.856 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:33.856 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:33.856 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:33.856 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:33.856 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:33.856 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:33.856 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:33.856 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:33.856 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:33.856 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:33.856 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:33.856 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:33.856 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:33.856 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:33.856 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.640 ms 00:32:33.856 00:32:33.856 --- 10.0.0.2 ping statistics --- 00:32:33.856 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:33.856 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms 00:32:33.856 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:33.856 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:33.856 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:32:33.856 00:32:33.856 --- 10.0.0.1 ping statistics --- 00:32:33.856 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:33.856 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:32:33.856 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:33.856 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:32:33.856 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:33.856 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:33.856 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:33.856 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:33.856 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:33.856 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:33.856 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:33.856 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:32:33.856 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:33.856 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:33.856 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:33.856 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=3365042 00:32:33.856 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 3365042 00:32:33.856 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:32:33.856 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 3365042 ']' 00:32:33.856 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:33.856 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:33.856 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:33.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
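
The wiring above is a two-port loopback: one physical E810 port (cvl_0_0) moves into the cvl_0_0_ns_spdk namespace as the 10.0.0.2 target side, its sibling (cvl_0_1) stays in the root namespace as the 10.0.0.1 initiator, an iptables rule admits the NVMe/TCP port, and a ping in each direction proves connectivity. Condensed from the trace (run as root; interface names follow the log):

  NS=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                      # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # root ns -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1               # target ns -> initiator
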
00:32:33.856 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:33.856 07:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:33.856 [2024-10-16 07:15:32.553475] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:33.856 [2024-10-16 07:15:32.554617] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:32:33.856 [2024-10-16 07:15:32.554674] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:33.856 [2024-10-16 07:15:32.644762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:33.856 [2024-10-16 07:15:32.699245] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:33.856 [2024-10-16 07:15:32.699297] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:33.856 [2024-10-16 07:15:32.699305] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:33.856 [2024-10-16 07:15:32.699312] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:33.856 [2024-10-16 07:15:32.699319] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:33.856 [2024-10-16 07:15:32.701371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:33.856 [2024-10-16 07:15:32.701533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:33.856 [2024-10-16 07:15:32.701696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:33.856 [2024-10-16 07:15:32.701696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:33.856 [2024-10-16 07:15:32.702060] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
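
The target starts inside the namespace with all four reactors (-m 0xF) in interrupt mode, and --wait-for-rpc holds the rest of initialization until an explicit RPC; the NOTICE lines above confirm each reactor and the app thread came up in interrupt mode. A sketch of the equivalent manual launch (run as root from the SPDK repo):

  NS=cvl_0_0_ns_spdk
  ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
  nvmfpid=$!
  # Only the RPC server is up at this point; wait for its UNIX socket before
  # driving the init sequence (framework_start_init) shown below.
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
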
00:32:34.118 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:34.118 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:32:34.118 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:34.118 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:34.118 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:34.118 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:34.118 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:32:34.118 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.118 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:34.118 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.118 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:32:34.118 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.118 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:34.118 [2024-10-16 07:15:33.474008] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:34.118 [2024-10-16 07:15:33.474689] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:34.118 [2024-10-16 07:15:33.474749] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:34.118 [2024-10-16 07:15:33.474943] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
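
Before finishing initialization, the test shrinks the bdev_io pool to 5 entries with a 1-entry per-thread cache (bdev_set_options -p 5 -c 1); bdevperf's 128-deep queues will run that pool dry, which is exactly what forces the spdk_bdev_queue_io_wait path this test covers. The full RPC sequence, the remainder of which is traced just below, expressed as a sketch with scripts/rpc.py (rpc_cmd in the harness is a thin wrapper around it):

  ./scripts/rpc.py bdev_set_options -p 5 -c 1     # tiny bdev_io pool: forces io_wait
  ./scripts/rpc.py framework_start_init           # resume the --wait-for-rpc deferred init
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
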
00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:34.119 [2024-10-16 07:15:33.486268] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:34.119 Malloc0 00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:34.119 [2024-10-16 07:15:33.558757] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3365124 00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3365126 00:32:34.119 07:15:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:34.119 { 00:32:34.119 "params": { 00:32:34.119 "name": "Nvme$subsystem", 00:32:34.119 "trtype": "$TEST_TRANSPORT", 00:32:34.119 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:34.119 "adrfam": "ipv4", 00:32:34.119 "trsvcid": "$NVMF_PORT", 00:32:34.119 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:34.119 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:34.119 "hdgst": ${hdgst:-false}, 00:32:34.119 "ddgst": ${ddgst:-false} 00:32:34.119 }, 00:32:34.119 "method": "bdev_nvme_attach_controller" 00:32:34.119 } 00:32:34.119 EOF 00:32:34.119 )") 00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3365128 00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:34.119 { 00:32:34.119 "params": { 00:32:34.119 "name": "Nvme$subsystem", 00:32:34.119 "trtype": "$TEST_TRANSPORT", 00:32:34.119 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:34.119 "adrfam": "ipv4", 00:32:34.119 "trsvcid": "$NVMF_PORT", 00:32:34.119 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:34.119 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:34.119 "hdgst": ${hdgst:-false}, 00:32:34.119 "ddgst": ${ddgst:-false} 00:32:34.119 }, 00:32:34.119 "method": "bdev_nvme_attach_controller" 00:32:34.119 } 00:32:34.119 EOF 00:32:34.119 )") 00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3365131 00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 
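
Four bdevperf instances launch in parallel against the same subsystem, one workload apiece, each with its own core mask and shm instance id (-i) so the EAL processes can coexist, each capped at 256 MiB (-s 256); the script then syncs and waits on the write/read/flush/unmap pids. A condensed sketch; bdevperf.json stands in for the config the harness pipes in on /dev/fd/63, generated as sketched after the next block:

  for spec in "write 0x10 1" "read 0x20 2" "flush 0x40 3" "unmap 0x80 4"; do
    read -r w mask id <<< "$spec"
    ./build/examples/bdevperf -m "$mask" -i "$id" --json bdevperf.json \
        -q 128 -o 4096 -w "$w" -t 1 -s 256 &
  done
  wait  # a 128-deep queue against the 5-entry bdev_io pool is what drives io_wait
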
00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:34.119 { 00:32:34.119 "params": { 00:32:34.119 "name": "Nvme$subsystem", 00:32:34.119 "trtype": "$TEST_TRANSPORT", 00:32:34.119 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:34.119 "adrfam": "ipv4", 00:32:34.119 "trsvcid": "$NVMF_PORT", 00:32:34.119 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:34.119 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:34.119 "hdgst": ${hdgst:-false}, 00:32:34.119 "ddgst": ${ddgst:-false} 00:32:34.119 }, 00:32:34.119 "method": "bdev_nvme_attach_controller" 00:32:34.119 } 00:32:34.119 EOF 00:32:34.119 )") 00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:34.119 { 00:32:34.119 "params": { 00:32:34.119 "name": "Nvme$subsystem", 00:32:34.119 "trtype": "$TEST_TRANSPORT", 00:32:34.119 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:34.119 "adrfam": "ipv4", 00:32:34.119 "trsvcid": "$NVMF_PORT", 00:32:34.119 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:34.119 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:34.119 "hdgst": ${hdgst:-false}, 00:32:34.119 "ddgst": ${ddgst:-false} 00:32:34.119 }, 00:32:34.119 "method": "bdev_nvme_attach_controller" 00:32:34.119 } 00:32:34.119 EOF 00:32:34.119 )") 00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3365124 00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
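
Each bdevperf reads its config from gen_nvmf_target_json: one quoted heredoc fragment per subsystem with the target address baked in, comma-joined and normalized with jq. A simplified sketch, under the assumption that a plain bdev-subsystem wrapper around the attach-controller call is sufficient (the resolved params match the documents printed just below):

  gen_target_json_sketch() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
      config+=("{ \"method\": \"bdev_nvme_attach_controller\",
        \"params\": { \"name\": \"Nvme$subsystem\", \"trtype\": \"tcp\",
          \"traddr\": \"10.0.0.2\", \"adrfam\": \"ipv4\", \"trsvcid\": \"4420\",
          \"subnqn\": \"nqn.2016-06.io.spdk:cnode$subsystem\",
          \"hostnqn\": \"nqn.2016-06.io.spdk:host$subsystem\",
          \"hdgst\": false, \"ddgst\": false } }")
    done
    local IFS=,
    jq . <<< "{ \"subsystems\": [ { \"subsystem\": \"bdev\", \"config\": [ ${config[*]} ] } ] }"
  }
  gen_target_json_sketch 1 > bdevperf.json   # the document each bdevperf reads via --json
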
00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:32:34.119 "params": { 00:32:34.119 "name": "Nvme1", 00:32:34.119 "trtype": "tcp", 00:32:34.119 "traddr": "10.0.0.2", 00:32:34.119 "adrfam": "ipv4", 00:32:34.119 "trsvcid": "4420", 00:32:34.119 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:34.119 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:34.119 "hdgst": false, 00:32:34.119 "ddgst": false 00:32:34.119 }, 00:32:34.119 "method": "bdev_nvme_attach_controller" 00:32:34.119 }' 00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:32:34.119 "params": { 00:32:34.119 "name": "Nvme1", 00:32:34.119 "trtype": "tcp", 00:32:34.119 "traddr": "10.0.0.2", 00:32:34.119 "adrfam": "ipv4", 00:32:34.119 "trsvcid": "4420", 00:32:34.119 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:34.119 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:34.119 "hdgst": false, 00:32:34.119 "ddgst": false 00:32:34.119 }, 00:32:34.119 "method": "bdev_nvme_attach_controller" 00:32:34.119 }' 00:32:34.119 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:32:34.120 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:32:34.120 "params": { 00:32:34.120 "name": "Nvme1", 00:32:34.120 "trtype": "tcp", 00:32:34.120 "traddr": "10.0.0.2", 00:32:34.120 "adrfam": "ipv4", 00:32:34.120 "trsvcid": "4420", 00:32:34.120 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:34.120 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:34.120 "hdgst": false, 00:32:34.120 "ddgst": false 00:32:34.120 }, 00:32:34.120 "method": "bdev_nvme_attach_controller" 00:32:34.120 }' 00:32:34.120 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:32:34.120 07:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:32:34.120 "params": { 00:32:34.120 "name": "Nvme1", 00:32:34.120 "trtype": "tcp", 00:32:34.120 "traddr": "10.0.0.2", 00:32:34.120 "adrfam": "ipv4", 00:32:34.120 "trsvcid": "4420", 00:32:34.120 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:34.120 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:34.120 "hdgst": false, 00:32:34.120 "ddgst": false 00:32:34.120 }, 00:32:34.120 "method": "bdev_nvme_attach_controller" 00:32:34.120 }' 00:32:34.120 [2024-10-16 07:15:33.616115] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:32:34.120 [2024-10-16 07:15:33.616190] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:32:34.381 [2024-10-16 07:15:33.618011] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
00:32:34.381 [2024-10-16 07:15:33.618077] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:32:34.381 [2024-10-16 07:15:33.618395] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:32:34.381 [2024-10-16 07:15:33.618453] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:32:34.381 [2024-10-16 07:15:33.620987] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:32:34.381 [2024-10-16 07:15:33.621058] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:32:34.381 [2024-10-16 07:15:33.832855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:34.381 [2024-10-16 07:15:33.873711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:34.643 [2024-10-16 07:15:33.924234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:34.643 [2024-10-16 07:15:33.964664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:32:34.643 [2024-10-16 07:15:34.017322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:34.643 [2024-10-16 07:15:34.056700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:32:34.643 [2024-10-16 07:15:34.103819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:34.904 [2024-10-16 07:15:34.145684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:32:34.904 Running I/O for 1 seconds... 00:32:35.164 Running I/O for 1 seconds... 00:32:35.164 Running I/O for 1 seconds... 00:32:35.164 Running I/O for 1 seconds... 
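
While the one-second jobs run, the target can be inspected with read-only RPCs; a sketch, with the caveat that the framework_get_reactors output fields are assumed from recent SPDK releases:

  ./scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode1   # one qpair per bdevperf job
  ./scripts/rpc.py framework_get_reactors | jq '.reactors[] | {lcore, busy, idle}'
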
00:32:36.106 11139.00 IOPS, 43.51 MiB/s
00:32:36.106 Latency(us)
00:32:36.106 [2024-10-16T05:15:35.605Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:36.106 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:32:36.106 Nvme1n1 : 1.01 11179.94 43.67 0.00 0.00 11400.84 4833.28 14854.83
00:32:36.106 [2024-10-16T05:15:35.605Z] ===================================================================================================================
00:32:36.106 [2024-10-16T05:15:35.605Z] Total : 11179.94 43.67 0.00 0.00 11400.84 4833.28 14854.83
00:32:36.106 10597.00 IOPS, 41.39 MiB/s
00:32:36.106 Latency(us)
00:32:36.107 [2024-10-16T05:15:35.606Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:36.107 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:32:36.107 Nvme1n1 : 1.01 10671.03 41.68 0.00 0.00 11951.36 2867.20 16930.13
00:32:36.107 [2024-10-16T05:15:35.606Z] ===================================================================================================================
00:32:36.107 [2024-10-16T05:15:35.606Z] Total : 10671.03 41.68 0.00 0.00 11951.36 2867.20 16930.13
00:32:36.107 9419.00 IOPS, 36.79 MiB/s
00:32:36.107 Latency(us)
00:32:36.107 [2024-10-16T05:15:35.606Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:36.107 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:32:36.107 Nvme1n1 : 1.01 9502.89 37.12 0.00 0.00 13422.98 2143.57 21080.75
00:32:36.107 [2024-10-16T05:15:35.606Z] ===================================================================================================================
00:32:36.107 [2024-10-16T05:15:35.606Z] Total : 9502.89 37.12 0.00 0.00 13422.98 2143.57 21080.75
00:32:36.107 180008.00 IOPS, 703.16 MiB/s
00:32:36.107 Latency(us)
00:32:36.107 [2024-10-16T05:15:35.606Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:36.107 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:32:36.107 Nvme1n1 : 1.00 179653.46 701.77 0.00 0.00 708.48 303.79 1979.73
00:32:36.107 [2024-10-16T05:15:35.606Z] ===================================================================================================================
00:32:36.107 [2024-10-16T05:15:35.606Z] Total : 179653.46 701.77 0.00 0.00 708.48 303.79 1979.73
00:32:36.107 07:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3365126
00:32:36.107 07:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3365128
00:32:36.107 07:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3365131
00:32:36.107 07:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:32:36.107 07:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:36.107 07:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:32:36.107 07:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:36.107 07:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:32:36.107 07:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait --
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:32:36.107 07:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:36.107 07:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:32:36.107 07:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:36.107 07:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:32:36.107 07:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:36.107 07:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:36.107 rmmod nvme_tcp 00:32:36.368 rmmod nvme_fabrics 00:32:36.368 rmmod nvme_keyring 00:32:36.368 07:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:36.368 07:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:32:36.368 07:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:32:36.368 07:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 3365042 ']' 00:32:36.368 07:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 3365042 00:32:36.368 07:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 3365042 ']' 00:32:36.368 07:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 3365042 00:32:36.368 07:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:32:36.368 07:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:36.368 07:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3365042 00:32:36.368 07:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:36.368 07:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:36.368 07:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3365042' 00:32:36.368 killing process with pid 3365042 00:32:36.368 07:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 3365042 00:32:36.368 07:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 3365042 00:32:36.629 07:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:36.629 07:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:36.629 07:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:36.629 07:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:32:36.629 07:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 
00:32:36.629 07:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:36.629 07:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:32:36.629 07:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:36.629 07:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:36.629 07:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:36.629 07:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:36.629 07:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:38.543 07:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:38.543 00:32:38.543 real 0m13.216s 00:32:38.543 user 0m16.542s 00:32:38.543 sys 0m7.819s 00:32:38.543 07:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:38.543 07:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:38.543 ************************************ 00:32:38.543 END TEST nvmf_bdev_io_wait 00:32:38.543 ************************************ 00:32:38.543 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:32:38.543 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:32:38.543 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:38.543 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:38.804 ************************************ 00:32:38.804 START TEST nvmf_queue_depth 00:32:38.804 ************************************ 00:32:38.804 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:32:38.804 * Looking for test storage... 
00:32:38.804 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:38.804 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:38.804 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:32:38.804 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:38.804 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:38.804 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:38.804 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:38.804 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:38.804 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:32:38.804 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:32:38.804 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:32:38.804 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:32:38.804 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:32:38.804 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:32:38.804 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:38.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:38.805 --rc genhtml_branch_coverage=1 00:32:38.805 --rc genhtml_function_coverage=1 00:32:38.805 --rc genhtml_legend=1 00:32:38.805 --rc geninfo_all_blocks=1 00:32:38.805 --rc geninfo_unexecuted_blocks=1 00:32:38.805 00:32:38.805 ' 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:38.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:38.805 --rc genhtml_branch_coverage=1 00:32:38.805 --rc genhtml_function_coverage=1 00:32:38.805 --rc genhtml_legend=1 00:32:38.805 --rc geninfo_all_blocks=1 00:32:38.805 --rc geninfo_unexecuted_blocks=1 00:32:38.805 00:32:38.805 ' 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:38.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:38.805 --rc genhtml_branch_coverage=1 00:32:38.805 --rc genhtml_function_coverage=1 00:32:38.805 --rc genhtml_legend=1 00:32:38.805 --rc geninfo_all_blocks=1 00:32:38.805 --rc geninfo_unexecuted_blocks=1 00:32:38.805 00:32:38.805 ' 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:38.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:38.805 --rc genhtml_branch_coverage=1 00:32:38.805 --rc genhtml_function_coverage=1 00:32:38.805 --rc genhtml_legend=1 00:32:38.805 --rc geninfo_all_blocks=1 00:32:38.805 --rc 
geninfo_unexecuted_blocks=1 00:32:38.805 00:32:38.805 ' 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:38.805 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:39.067 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:39.067 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:39.067 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:32:39.067 07:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:47.214 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:47.214 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:32:47.214 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:47.214 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:47.214 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:47.214 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
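
The long scripts/common.sh walk earlier in this test (cmp_versions, decimal, the ver1/ver2 loop) is a per-component numeric version compare used to decide whether the installed lcov predates 2.x and pick coverage options accordingly. A condensed sketch of that logic, assuming purely numeric version components:

    cmp_lt() {                          # returns 0 when $1 < $2, mirroring scripts/common.sh
        local IFS='.-:'
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # strictly greater at this component
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # strictly less: done
        done
        return 1                        # equal versions are not "less than"
    }
    cmp_lt 1.15 2 && echo "installed lcov predates 2.x"   # matches the trace: lt 1.15 2
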
00:32:47.214 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:47.214 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:32:47.214 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:47.214 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:32:47.214 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:32:47.214 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:32:47.214 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:32:47.214 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:32:47.214 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:32:47.214 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:47.214 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:47.214 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:47.214 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:47.214 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:47.214 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:47.214 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:47.214 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:47.214 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:47.214 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:47.214 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:47.214 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:47.214 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:47.214 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:47.214 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:47.214 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:47.214 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:47.214 07:15:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:47.214 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:47.214 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:47.214 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:47.214 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:47.214 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:47.214 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:47.214 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:47.214 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:47.214 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:47.214 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:47.214 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:47.214 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:47.214 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:47.214 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:47.214 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:47.214 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:47.214 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:47.214 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:47.214 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:47.214 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:47.214 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:47.214 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
00:32:47.215 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:47.215 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:47.215 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:47.215 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.539 ms 00:32:47.215 00:32:47.215 --- 10.0.0.2 ping statistics --- 00:32:47.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:47.215 rtt min/avg/max/mdev = 0.539/0.539/0.539/0.000 ms 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:47.215 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:47.215 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:32:47.215 00:32:47.215 --- 10.0.0.1 ping statistics --- 00:32:47.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:47.215 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=3369816 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 3369816 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 3369816 ']' 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:47.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
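
At this point the test bed is fully wired: the two E810 ports discovered above (cvl_0_0, cvl_0_1) are looped back-to-back, one is pushed into a private network namespace to act as the target side, and the bidirectional pings confirm the path. The topology, reconstructed from the ip/iptables calls in the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target NIC lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                        # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target ns -> root ns
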
00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:47.215 07:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:47.215 [2024-10-16 07:15:45.871626] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:47.215 [2024-10-16 07:15:45.872750] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:32:47.215 [2024-10-16 07:15:45.872801] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:47.215 [2024-10-16 07:15:45.966791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:47.215 [2024-10-16 07:15:46.016882] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:47.215 [2024-10-16 07:15:46.016937] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:47.215 [2024-10-16 07:15:46.016946] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:47.215 [2024-10-16 07:15:46.016953] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:47.215 [2024-10-16 07:15:46.016959] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:47.215 [2024-10-16 07:15:46.017766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:47.215 [2024-10-16 07:15:46.093593] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:47.215 [2024-10-16 07:15:46.093919] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
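
The NOTICE lines above are the target coming up in interrupt mode: a single reactor on core 1 (core mask 0x2), with app_thread and the nvmf poll group switched to interrupt-driven scheduling instead of busy polling. The launch, as traced, runs the target inside the namespace so it owns the 10.0.0.2 side of the link:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x2    # shm id 0, all tracepoint groups, core mask 0x2
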
00:32:47.215 07:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:47.215 07:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:32:47.215 07:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:47.215 07:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:47.215 07:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:47.478 07:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:47.478 07:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:47.478 07:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.478 07:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:47.478 [2024-10-16 07:15:46.758647] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:47.478 07:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.478 07:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:47.478 07:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.478 07:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:47.478 Malloc0 00:32:47.478 07:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.478 07:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:47.478 07:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.478 07:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:47.478 07:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.478 07:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:47.478 07:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.478 07:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:47.478 07:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.478 07:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:47.478 07:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
00:32:47.478 07:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:47.478 [2024-10-16 07:15:46.842862] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:47.478 07:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.478 07:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3369978 00:32:47.478 07:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:47.478 07:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:32:47.478 07:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3369978 /var/tmp/bdevperf.sock 00:32:47.478 07:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 3369978 ']' 00:32:47.478 07:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:47.478 07:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:47.478 07:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:47.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:47.478 07:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:47.478 07:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:47.478 [2024-10-16 07:15:46.902242] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
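
The rpc_cmd calls above provision the target and then bring up bdevperf as the load generator; the controller attach and perform_tests steps appear in the trace just below. Consolidated into plain commands (invoking rpc.py directly instead of the harness's rpc_cmd wrapper is an assumption; the RPC names and arguments are taken verbatim from the trace):

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Target side: TCP transport, a 64 MiB malloc bdev with 512 B blocks, one subsystem.
    $spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    $spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    $spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    # Initiator side: bdevperf in standby (-z) on its own RPC socket; queue depth 1024,
    # 4 KiB I/O, verify workload, 10 s run.
    $spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
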
00:32:47.478 [2024-10-16 07:15:46.902319] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3369978 ] 00:32:47.740 [2024-10-16 07:15:46.984348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:47.740 [2024-10-16 07:15:47.039255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:48.312 07:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:48.312 07:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:32:48.312 07:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:48.312 07:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.312 07:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:48.573 NVMe0n1 00:32:48.573 07:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.573 07:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:48.573 Running I/O for 10 seconds... 00:32:50.905 8621.00 IOPS, 33.68 MiB/s [2024-10-16T05:15:51.346Z] 8924.00 IOPS, 34.86 MiB/s [2024-10-16T05:15:52.288Z] 9929.67 IOPS, 38.79 MiB/s [2024-10-16T05:15:53.230Z] 10952.25 IOPS, 42.78 MiB/s [2024-10-16T05:15:54.172Z] 11477.00 IOPS, 44.83 MiB/s [2024-10-16T05:15:55.114Z] 11859.50 IOPS, 46.33 MiB/s [2024-10-16T05:15:56.499Z] 12151.43 IOPS, 47.47 MiB/s [2024-10-16T05:15:57.069Z] 12406.25 IOPS, 48.46 MiB/s [2024-10-16T05:15:58.455Z] 12540.00 IOPS, 48.98 MiB/s [2024-10-16T05:15:58.455Z] 12709.80 IOPS, 49.65 MiB/s 00:32:58.956 Latency(us) 00:32:58.956 [2024-10-16T05:15:58.455Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:58.956 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:32:58.956 Verification LBA range: start 0x0 length 0x4000 00:32:58.956 NVMe0n1 : 10.05 12740.77 49.77 0.00 0.00 80097.49 19005.44 69468.16 00:32:58.956 [2024-10-16T05:15:58.455Z] =================================================================================================================== 00:32:58.956 [2024-10-16T05:15:58.455Z] Total : 12740.77 49.77 0.00 0.00 80097.49 19005.44 69468.16 00:32:58.956 { 00:32:58.956 "results": [ 00:32:58.956 { 00:32:58.956 "job": "NVMe0n1", 00:32:58.956 "core_mask": "0x1", 00:32:58.956 "workload": "verify", 00:32:58.956 "status": "finished", 00:32:58.956 "verify_range": { 00:32:58.956 "start": 0, 00:32:58.956 "length": 16384 00:32:58.956 }, 00:32:58.956 "queue_depth": 1024, 00:32:58.956 "io_size": 4096, 00:32:58.956 "runtime": 10.053867, 00:32:58.956 "iops": 12740.769298022344, 00:32:58.956 "mibps": 49.76863007039978, 00:32:58.956 "io_failed": 0, 00:32:58.957 "io_timeout": 0, 00:32:58.957 "avg_latency_us": 80097.48731442014, 00:32:58.957 "min_latency_us": 19005.44, 00:32:58.957 "max_latency_us": 69468.16 00:32:58.957 } 00:32:58.957 ], 
00:32:58.957 "core_count": 1 00:32:58.957 } 00:32:58.957 07:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3369978 00:32:58.957 07:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 3369978 ']' 00:32:58.957 07:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 3369978 00:32:58.957 07:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:32:58.957 07:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:58.957 07:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3369978 00:32:58.957 07:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:58.957 07:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:58.957 07:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3369978' 00:32:58.957 killing process with pid 3369978 00:32:58.957 07:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 3369978 00:32:58.957 Received shutdown signal, test time was about 10.000000 seconds 00:32:58.957 00:32:58.957 Latency(us) 00:32:58.957 [2024-10-16T05:15:58.456Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:58.957 [2024-10-16T05:15:58.456Z] =================================================================================================================== 00:32:58.957 [2024-10-16T05:15:58.456Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:58.957 07:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 3369978 00:32:58.957 07:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:32:58.957 07:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:32:58.957 07:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:58.957 07:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:32:58.957 07:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:58.957 07:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:32:58.957 07:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:58.957 07:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:58.957 rmmod nvme_tcp 00:32:58.957 rmmod nvme_fabrics 00:32:58.957 rmmod nvme_keyring 00:32:58.957 07:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:58.957 07:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:32:58.957 07:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:32:58.957 07:15:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 3369816 ']' 00:32:58.957 07:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 3369816 00:32:58.957 07:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 3369816 ']' 00:32:58.957 07:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 3369816 00:32:58.957 07:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:32:58.957 07:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:58.957 07:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3369816 00:32:58.957 07:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:58.957 07:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:58.957 07:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3369816' 00:32:58.957 killing process with pid 3369816 00:32:58.957 07:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 3369816 00:32:58.957 07:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 3369816 00:32:59.217 07:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:59.217 07:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:59.217 07:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:59.217 07:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:32:59.217 07:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:32:59.217 07:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:59.217 07:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:32:59.218 07:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:59.218 07:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:59.218 07:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:59.218 07:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:59.218 07:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:01.766 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:01.766 00:33:01.766 real 0m22.584s 00:33:01.766 user 0m24.838s 00:33:01.766 sys 0m7.439s 00:33:01.766 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:33:01.766 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:01.766 ************************************ 00:33:01.766 END TEST nvmf_queue_depth 00:33:01.766 ************************************ 00:33:01.766 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:33:01.766 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:33:01.766 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:01.766 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:01.766 ************************************ 00:33:01.766 START TEST nvmf_target_multipath 00:33:01.766 ************************************ 00:33:01.766 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:33:01.766 * Looking for test storage... 00:33:01.766 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:01.766 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:01.766 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:33:01.766 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:01.766 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:01.766 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:01.766 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:01.766 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:01.766 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:33:01.766 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:33:01.766 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:33:01.766 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:33:01.766 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:33:01.766 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:33:01.766 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:33:01.766 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:01.766 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:33:01.766 07:16:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:33:01.766 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:01.766 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:01.766 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:33:01.766 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:33:01.766 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:01.766 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:33:01.766 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:33:01.766 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:33:01.766 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:33:01.766 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:01.766 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:33:01.766 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:01.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:01.767 --rc genhtml_branch_coverage=1 00:33:01.767 --rc genhtml_function_coverage=1 00:33:01.767 --rc genhtml_legend=1 00:33:01.767 --rc geninfo_all_blocks=1 00:33:01.767 --rc geninfo_unexecuted_blocks=1 00:33:01.767 00:33:01.767 ' 00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:01.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:01.767 --rc genhtml_branch_coverage=1 00:33:01.767 --rc genhtml_function_coverage=1 00:33:01.767 --rc genhtml_legend=1 00:33:01.767 --rc geninfo_all_blocks=1 00:33:01.767 --rc geninfo_unexecuted_blocks=1 00:33:01.767 00:33:01.767 ' 00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:01.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:01.767 --rc genhtml_branch_coverage=1 00:33:01.767 --rc genhtml_function_coverage=1 00:33:01.767 --rc genhtml_legend=1 00:33:01.767 --rc geninfo_all_blocks=1 00:33:01.767 --rc 
geninfo_unexecuted_blocks=1 00:33:01.767 00:33:01.767 ' 00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:01.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:01.767 --rc genhtml_branch_coverage=1 00:33:01.767 --rc genhtml_function_coverage=1 00:33:01.767 --rc genhtml_legend=1 00:33:01.767 --rc geninfo_all_blocks=1 00:33:01.767 --rc geninfo_unexecuted_blocks=1 00:33:01.767 00:33:01.767 ' 00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:01.767 07:16:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:01.767 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:33:01.768 07:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
00:33:09.914 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:09.914 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:33:09.914 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:09.914 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:09.914 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:09.914 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:09.914 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:09.914 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:33:09.914 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:09.914 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:33:09.914 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:33:09.914 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:33:09.914 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:33:09.914 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:33:09.914 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:33:09.914 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:09.914 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:09.914 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:09.914 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:09.914 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:09.914 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:09.914 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:09.914 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:09.914 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:09.914 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:09.914 07:16:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:09.914 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:09.914 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:09.914 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:09.914 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:09.914 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:09.914 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:09.914 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:09.914 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:09.914 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:09.914 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:09.914 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:09.914 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:09.914 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:09.914 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:09.914 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:09.914 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:09.914 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:09.914 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:09.914 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:09.914 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:09.914 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:09.914 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:09.914 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:09.914 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:09.914 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:09.914 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:09.914 07:16:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:09.914 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:09.914 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:09.914 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:09.914 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:09.914 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:09.914 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:09.914 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:09.914 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:09.914 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:09.914 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:09.914 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:09.914 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:09.915 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:09.915 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:09.915 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.674 ms 00:33:09.915 00:33:09.915 --- 10.0.0.2 ping statistics --- 00:33:09.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:09.915 rtt min/avg/max/mdev = 0.674/0.674/0.674/0.000 ms 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:09.915 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:09.915 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:33:09.915 00:33:09.915 --- 10.0.0.1 ping statistics --- 00:33:09.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:09.915 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:33:09.915 only one NIC for nvmf test 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:09.915 rmmod nvme_tcp 00:33:09.915 rmmod nvme_fabrics 00:33:09.915 rmmod nvme_keyring 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:33:09.915 07:16:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:09.915 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:09.916 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:09.916 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:09.916 07:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:11.304 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:11.304 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:33:11.304 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:33:11.304 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:11.304 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:33:11.304 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:11.304 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:33:11.304 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:11.304 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:11.304 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:11.304 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:33:11.304 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:33:11.304 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:33:11.304 07:16:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:11.304 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:11.304 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:11.304 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:33:11.304 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:33:11.304 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:11.304 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:33:11.304 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:11.304 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:11.304 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:11.304 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:11.304 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:11.304 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:11.304 00:33:11.304 real 0m9.913s 00:33:11.304 user 0m2.223s 00:33:11.304 sys 0m5.637s 00:33:11.304 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:11.304 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:33:11.304 ************************************ 00:33:11.304 END TEST nvmf_target_multipath 00:33:11.304 ************************************ 00:33:11.304 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:33:11.304 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:33:11.304 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:11.304 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:11.304 ************************************ 00:33:11.304 START TEST nvmf_zcopy 00:33:11.304 ************************************ 00:33:11.304 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:33:11.566 * Looking for test storage... 
00:33:11.566 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:11.566 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:11.566 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:33:11.566 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:11.566 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:11.566 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:11.566 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:11.566 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:11.566 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:33:11.566 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:33:11.566 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:33:11.566 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:33:11.566 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:33:11.566 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:33:11.566 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:33:11.566 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:11.566 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:33:11.566 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:33:11.566 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:11.566 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:11.566 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:33:11.566 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:33:11.566 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:11.566 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:33:11.566 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:33:11.566 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:33:11.566 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:33:11.566 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:11.566 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:33:11.566 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:33:11.566 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:11.566 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:11.566 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:33:11.566 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:11.566 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:11.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:11.566 --rc genhtml_branch_coverage=1 00:33:11.566 --rc genhtml_function_coverage=1 00:33:11.566 --rc genhtml_legend=1 00:33:11.566 --rc geninfo_all_blocks=1 00:33:11.566 --rc geninfo_unexecuted_blocks=1 00:33:11.566 00:33:11.566 ' 00:33:11.566 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:11.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:11.567 --rc genhtml_branch_coverage=1 00:33:11.567 --rc genhtml_function_coverage=1 00:33:11.567 --rc genhtml_legend=1 00:33:11.567 --rc geninfo_all_blocks=1 00:33:11.567 --rc geninfo_unexecuted_blocks=1 00:33:11.567 00:33:11.567 ' 00:33:11.567 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:11.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:11.567 --rc genhtml_branch_coverage=1 00:33:11.567 --rc genhtml_function_coverage=1 00:33:11.567 --rc genhtml_legend=1 00:33:11.567 --rc geninfo_all_blocks=1 00:33:11.567 --rc geninfo_unexecuted_blocks=1 00:33:11.567 00:33:11.567 ' 00:33:11.567 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:11.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:11.567 --rc genhtml_branch_coverage=1 00:33:11.567 --rc genhtml_function_coverage=1 00:33:11.567 --rc genhtml_legend=1 00:33:11.567 --rc geninfo_all_blocks=1 00:33:11.567 --rc geninfo_unexecuted_blocks=1 00:33:11.567 00:33:11.567 ' 00:33:11.567 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:11.567 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:33:11.567 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:11.567 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:11.567 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:11.567 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:11.567 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:11.567 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:11.567 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:11.567 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:11.567 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:11.567 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:11.567 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:11.567 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:11.567 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:11.567 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:11.567 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:11.567 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:11.567 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:11.567 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:33:11.567 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:11.567 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:11.567 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:11.567 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.567 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.567 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.567 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:33:11.567 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.567 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:33:11.567 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:11.567 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:11.567 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:11.567 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:11.567 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:11.567 07:16:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:11.567 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:11.567 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:11.567 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:11.567 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:11.567 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:33:11.567 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:11.567 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:11.567 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:11.567 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:11.567 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:11.567 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:11.567 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:11.567 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:11.567 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:11.567 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:11.567 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:33:11.567 07:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:19.827 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:19.827 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:33:19.827 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:19.827 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:19.827 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:19.827 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:19.827 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:19.827 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:33:19.827 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:19.827 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:33:19.827 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:33:19.828 07:16:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:19.828 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:19.828 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:19.828 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:19.828 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:19.828 07:16:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:33:19.828 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:33:19.828 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.753 ms
00:33:19.828
00:33:19.828 --- 10.0.0.2 ping statistics ---
00:33:19.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:19.828 rtt min/avg/max/mdev = 0.753/0.753/0.753/0.000 ms
00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:33:19.828 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:33:19.828 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms
00:33:19.828
00:33:19.828 --- 10.0.0.1 ping statistics ---
00:33:19.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:19.828 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms
00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0
00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable
00:33:19.828 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:33:19.829 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=3380494
00:33:19.829 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 3380494
00:33:19.829 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy --
nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:33:19.829 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 3380494 ']' 00:33:19.829 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:19.829 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:19.829 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:19.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:19.829 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:19.829 07:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:19.829 [2024-10-16 07:16:18.489205] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:19.829 [2024-10-16 07:16:18.490327] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:33:19.829 [2024-10-16 07:16:18.490381] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:19.829 [2024-10-16 07:16:18.579418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:19.829 [2024-10-16 07:16:18.629624] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:19.829 [2024-10-16 07:16:18.629674] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:19.829 [2024-10-16 07:16:18.629682] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:19.829 [2024-10-16 07:16:18.629689] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:19.829 [2024-10-16 07:16:18.629696] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:19.829 [2024-10-16 07:16:18.630473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:19.829 [2024-10-16 07:16:18.707622] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:19.829 [2024-10-16 07:16:18.707941] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
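At this point the harness has a point-to-point topology: target port cvl_0_0 lives inside the cvl_0_0_ns_spdk namespace as 10.0.0.2, initiator port cvl_0_1 stays in the root namespace as 10.0.0.1, port 4420 is opened in iptables, and both directions are ping-verified. The target is then started inside that namespace in interrupt mode, as traced above. A minimal stand-alone sketch of that launch-and-wait step, with paths and flags copied from the trace; the rpc_get_methods readiness probe is an illustrative substitute for the harness's waitforlisten, not its actual implementation:

    # Start the nvmf target inside the test namespace, in interrupt mode.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
    nvmfpid=$!
    # Poll the app's RPC socket until it answers; give up if the target dies first.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo 'nvmf_tgt exited before listening' >&2; exit 1; }
        sleep 0.2
    done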
00:33:19.829 07:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:19.829 07:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:33:19.829 07:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:19.829 07:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:19.829 07:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:20.091 07:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:20.091 07:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:33:20.091 07:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:33:20.091 07:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:20.091 07:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:20.091 [2024-10-16 07:16:19.351338] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:20.091 07:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:20.091 07:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:20.091 07:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:20.091 07:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:20.091 07:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:20.091 07:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:20.091 07:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:20.091 07:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:20.091 [2024-10-16 07:16:19.379651] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:20.091 07:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:20.091 07:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:20.091 07:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:20.091 07:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:20.091 07:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:20.091 07:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:33:20.091 07:16:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:20.091 07:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:20.091 malloc0 00:33:20.091 07:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:20.091 07:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:33:20.091 07:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:20.091 07:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:20.091 07:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:20.091 07:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:33:20.091 07:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:33:20.091 07:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:33:20.091 07:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:33:20.091 07:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:33:20.091 07:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:33:20.091 { 00:33:20.091 "params": { 00:33:20.091 "name": "Nvme$subsystem", 00:33:20.091 "trtype": "$TEST_TRANSPORT", 00:33:20.091 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:20.091 "adrfam": "ipv4", 00:33:20.091 "trsvcid": "$NVMF_PORT", 00:33:20.091 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:20.091 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:20.091 "hdgst": ${hdgst:-false}, 00:33:20.091 "ddgst": ${ddgst:-false} 00:33:20.092 }, 00:33:20.092 "method": "bdev_nvme_attach_controller" 00:33:20.092 } 00:33:20.092 EOF 00:33:20.092 )") 00:33:20.092 07:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:33:20.092 07:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:33:20.092 07:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:33:20.092 07:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:33:20.092 "params": { 00:33:20.092 "name": "Nvme1", 00:33:20.092 "trtype": "tcp", 00:33:20.092 "traddr": "10.0.0.2", 00:33:20.092 "adrfam": "ipv4", 00:33:20.092 "trsvcid": "4420", 00:33:20.092 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:20.092 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:20.092 "hdgst": false, 00:33:20.092 "ddgst": false 00:33:20.092 }, 00:33:20.092 "method": "bdev_nvme_attach_controller" 00:33:20.092 }' 00:33:20.092 [2024-10-16 07:16:19.484365] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
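The provisioning traced at zcopy.sh@22 through @30 above is plain JSON-RPC against that socket; rpc_cmd in the harness is a thin wrapper around scripts/rpc.py. Replayed by hand, the same sequence would look like the sketch below, with every flag copied verbatim from the trace. --zcopy is the point of this test: it asks the TCP transport to use zero-copy operations where the stack supports them.

    rpc="./scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy                                        # zcopy.sh@22
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10     # zcopy.sh@24
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # zcopy.sh@25
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420                    # zcopy.sh@27
    $rpc bdev_malloc_create 32 4096 -b malloc0    # 32 MiB ram bdev, 4096-byte blocks        # zcopy.sh@29
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1                       # zcopy.sh@30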
00:33:20.092 [2024-10-16 07:16:19.484437] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3380570 ]
00:33:20.092 [2024-10-16 07:16:19.566480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:20.353 [2024-10-16 07:16:19.619934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:33:20.353 Running I/O for 10 seconds...
00:33:22.687 6378.00 IOPS, 49.83 MiB/s
[2024-10-16T05:16:23.130Z] 6430.50 IOPS, 50.24 MiB/s
[2024-10-16T05:16:24.074Z] 6443.33 IOPS, 50.34 MiB/s
[2024-10-16T05:16:25.015Z] 6454.75 IOPS, 50.43 MiB/s
[2024-10-16T05:16:25.958Z] 6758.20 IOPS, 52.80 MiB/s
[2024-10-16T05:16:26.899Z] 7241.17 IOPS, 56.57 MiB/s
[2024-10-16T05:16:28.282Z] 7582.43 IOPS, 59.24 MiB/s
[2024-10-16T05:16:28.853Z] 7839.12 IOPS, 61.24 MiB/s
[2024-10-16T05:16:30.236Z] 8039.33 IOPS, 62.81 MiB/s
[2024-10-16T05:16:30.236Z] 8199.90 IOPS, 64.06 MiB/s
00:33:30.737 Latency(us)
00:33:30.737 [2024-10-16T05:16:30.236Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:30.737 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:33:30.737 Verification LBA range: start 0x0 length 0x1000
00:33:30.737 Nvme1n1 : 10.01 8202.11 64.08 0.00 0.00 15559.14 1372.16 28398.93
00:33:30.737 [2024-10-16T05:16:30.236Z] ===================================================================================================================
00:33:30.737 [2024-10-16T05:16:30.236Z] Total : 8202.11 64.08 0.00 0.00 15559.14 1372.16 28398.93
00:33:30.737 07:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3382541
00:33:30.737 07:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:33:30.737 07:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:33:30.738 07:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:33:30.738 07:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:33:30.738 07:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=()
00:33:30.738 07:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config
00:33:30.738 07:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}"
00:33:30.738 07:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF
00:33:30.738 {
00:33:30.738 "params": {
00:33:30.738 "name": "Nvme$subsystem",
00:33:30.738 "trtype": "$TEST_TRANSPORT",
00:33:30.738 "traddr": "$NVMF_FIRST_TARGET_IP",
00:33:30.738 "adrfam": "ipv4",
00:33:30.738 "trsvcid": "$NVMF_PORT",
00:33:30.738 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:33:30.738 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:33:30.738 "hdgst": ${hdgst:-false},
00:33:30.738 "ddgst": ${ddgst:-false}
00:33:30.738 },
00:33:30.738 "method": "bdev_nvme_attach_controller"
00:33:30.738 }
00:33:30.738 EOF
00:33:30.738 )")
[2024-10-16 07:16:29.962878] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-10-16 07:16:29.962910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:30.738 07:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat
00:33:30.738 07:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq .
00:33:30.738 07:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=,
00:33:30.738 07:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{
00:33:30.738 "params": {
00:33:30.738 "name": "Nvme1",
00:33:30.738 "trtype": "tcp",
00:33:30.738 "traddr": "10.0.0.2",
00:33:30.738 "adrfam": "ipv4",
00:33:30.738 "trsvcid": "4420",
00:33:30.738 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:33:30.738 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:33:30.738 "hdgst": false,
00:33:30.738 "ddgst": false
00:33:30.738 },
00:33:30.738 "method": "bdev_nvme_attach_controller"
00:33:30.738 }'
[2024-10-16 07:16:29.974838] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-10-16 07:16:29.974852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[2024-10-16 07:16:29.986836] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-10-16 07:16:29.986848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[2024-10-16 07:16:29.998836] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-10-16 07:16:29.998853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[2024-10-16 07:16:30.010152] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization...
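gen_nvmf_target_json, traced twice above, emits only the bdev_nvme_attach_controller fragment; bdevperf receives it through process substitution, which is why the command line shows --json /dev/fd/63. A hand-rolled equivalent of this second run, assuming the fragment is wrapped in the usual subsystems/bdev envelope (the wrapper below is an assumption; the params block is copied from the printf output above):

    # Feed an inline JSON config to bdevperf via process substitution (/dev/fd/NN).
    ./build/examples/bdevperf -t 5 -q 128 -w randrw -M 50 -o 8192 --json <(cat <<'EOF'
    {
      "subsystems": [{
        "subsystem": "bdev",
        "config": [{
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false, "ddgst": false
          }
        }]
      }]
    }
    EOF
    )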
00:33:30.738 [2024-10-16 07:16:30.010209] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3382541 ] 00:33:30.738 [2024-10-16 07:16:30.010842] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.738 [2024-10-16 07:16:30.010858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.738 [2024-10-16 07:16:30.022836] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.738 [2024-10-16 07:16:30.022848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.738 [2024-10-16 07:16:30.034837] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.738 [2024-10-16 07:16:30.034850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.738 [2024-10-16 07:16:30.046835] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.738 [2024-10-16 07:16:30.046848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.738 [2024-10-16 07:16:30.058836] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.738 [2024-10-16 07:16:30.058849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.738 [2024-10-16 07:16:30.070835] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.738 [2024-10-16 07:16:30.070851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.738 [2024-10-16 07:16:30.082836] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.738 [2024-10-16 07:16:30.082856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.738 [2024-10-16 07:16:30.087864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:30.738 [2024-10-16 07:16:30.094837] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.738 [2024-10-16 07:16:30.094854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.738 [2024-10-16 07:16:30.106836] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.738 [2024-10-16 07:16:30.106851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.738 [2024-10-16 07:16:30.118331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:30.738 [2024-10-16 07:16:30.118835] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.738 [2024-10-16 07:16:30.118851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.738 [2024-10-16 07:16:30.130841] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.738 [2024-10-16 07:16:30.130857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.738 [2024-10-16 07:16:30.142842] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.738 [2024-10-16 07:16:30.142859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.738 [2024-10-16 07:16:30.154838] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:33:30.738 [2024-10-16 07:16:30.154854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.738 [2024-10-16 07:16:30.166838] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.738 [2024-10-16 07:16:30.166851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.738 [2024-10-16 07:16:30.178836] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.738 [2024-10-16 07:16:30.178850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.738 [2024-10-16 07:16:30.190852] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.738 [2024-10-16 07:16:30.190869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.738 [2024-10-16 07:16:30.202841] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.738 [2024-10-16 07:16:30.202858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.738 [2024-10-16 07:16:30.214837] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.738 [2024-10-16 07:16:30.214852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.738 [2024-10-16 07:16:30.226835] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.738 [2024-10-16 07:16:30.226848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.999 [2024-10-16 07:16:30.238835] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.999 [2024-10-16 07:16:30.238848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.999 [2024-10-16 07:16:30.250835] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.999 [2024-10-16 07:16:30.250847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.999 [2024-10-16 07:16:30.262837] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.999 [2024-10-16 07:16:30.262852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.999 [2024-10-16 07:16:30.274837] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.999 [2024-10-16 07:16:30.274851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.999 [2024-10-16 07:16:30.321333] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.999 [2024-10-16 07:16:30.321347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.999 [2024-10-16 07:16:30.330837] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.999 [2024-10-16 07:16:30.330856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.999 Running I/O for 5 seconds... 
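From here to the end of the section the log is an alternating stream of these two *ERROR* records. They are expected, not a failure: while the 5-second randrw job is in flight, the test keeps re-issuing nvmf_subsystem_add_ns for NSID 1, which malloc0 already occupies, so every call bounces off the paused-namespace path (hence nvmf_rpc_ns_paused in the second line of each pair) while I/O stays live. A hedged reconstruction of that loop, not zcopy.sh verbatim:

    # Hammer the add_ns path while bdevperf ($perfpid) runs; every attempt is
    # expected to fail and emit the subsystem.c / nvmf_rpc.c *ERROR* pair above.
    while kill -0 "$perfpid" 2>/dev/null; do
        ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done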
00:33:30.999 [2024-10-16 07:16:30.345897] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.999 [2024-10-16 07:16:30.345914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.999 [2024-10-16 07:16:30.359169] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.999 [2024-10-16 07:16:30.359189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.999 [2024-10-16 07:16:30.373651] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.999 [2024-10-16 07:16:30.373668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.999 [2024-10-16 07:16:30.386761] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.999 [2024-10-16 07:16:30.386778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.999 [2024-10-16 07:16:30.399340] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.000 [2024-10-16 07:16:30.399355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.000 [2024-10-16 07:16:30.413860] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.000 [2024-10-16 07:16:30.413875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.000 [2024-10-16 07:16:30.426718] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.000 [2024-10-16 07:16:30.426734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.000 [2024-10-16 07:16:30.439252] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.000 [2024-10-16 07:16:30.439268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.000 [2024-10-16 07:16:30.453661] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.000 [2024-10-16 07:16:30.453676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.000 [2024-10-16 07:16:30.466627] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.000 [2024-10-16 07:16:30.466644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.000 [2024-10-16 07:16:30.479272] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.000 [2024-10-16 07:16:30.479288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.000 [2024-10-16 07:16:30.494138] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.000 [2024-10-16 07:16:30.494155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.261 [2024-10-16 07:16:30.507393] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.261 [2024-10-16 07:16:30.507408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.261 [2024-10-16 07:16:30.521916] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.261 [2024-10-16 07:16:30.521932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.261 [2024-10-16 07:16:30.534947] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.261 
[2024-10-16 07:16:30.534962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.261 [2024-10-16 07:16:30.546573] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.261 [2024-10-16 07:16:30.546589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.261 [2024-10-16 07:16:30.559142] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.261 [2024-10-16 07:16:30.559157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.261 [2024-10-16 07:16:30.573959] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.261 [2024-10-16 07:16:30.573974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.261 [2024-10-16 07:16:30.586790] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.261 [2024-10-16 07:16:30.586810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.261 [2024-10-16 07:16:30.598872] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.262 [2024-10-16 07:16:30.598887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.262 [2024-10-16 07:16:30.611236] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.262 [2024-10-16 07:16:30.611251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.262 [2024-10-16 07:16:30.626228] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.262 [2024-10-16 07:16:30.626243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.262 [2024-10-16 07:16:30.639086] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.262 [2024-10-16 07:16:30.639100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.262 [2024-10-16 07:16:30.654025] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.262 [2024-10-16 07:16:30.654040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.262 [2024-10-16 07:16:30.666787] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.262 [2024-10-16 07:16:30.666802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.262 [2024-10-16 07:16:30.679393] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.262 [2024-10-16 07:16:30.679408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.262 [2024-10-16 07:16:30.694135] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.262 [2024-10-16 07:16:30.694151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.262 [2024-10-16 07:16:30.706790] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.262 [2024-10-16 07:16:30.706806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.262 [2024-10-16 07:16:30.719145] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.262 [2024-10-16 07:16:30.719160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.262 [2024-10-16 07:16:30.733189] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.262 [2024-10-16 07:16:30.733204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.262 [2024-10-16 07:16:30.746118] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.262 [2024-10-16 07:16:30.746134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.262 [2024-10-16 07:16:30.758561] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.262 [2024-10-16 07:16:30.758576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.524 [2024-10-16 07:16:30.771373] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.524 [2024-10-16 07:16:30.771390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.524 [2024-10-16 07:16:30.786351] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.524 [2024-10-16 07:16:30.786367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.524 [2024-10-16 07:16:30.799385] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.524 [2024-10-16 07:16:30.799401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.524 [2024-10-16 07:16:30.813954] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.524 [2024-10-16 07:16:30.813969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.524 [2024-10-16 07:16:30.826914] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.524 [2024-10-16 07:16:30.826930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.524 [2024-10-16 07:16:30.839010] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.524 [2024-10-16 07:16:30.839028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.524 [2024-10-16 07:16:30.850715] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.524 [2024-10-16 07:16:30.850730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.524 [2024-10-16 07:16:30.863249] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.524 [2024-10-16 07:16:30.863264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.524 [2024-10-16 07:16:30.878199] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.524 [2024-10-16 07:16:30.878214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.524 [2024-10-16 07:16:30.891108] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.524 [2024-10-16 07:16:30.891122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.524 [2024-10-16 07:16:30.905994] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.524 [2024-10-16 07:16:30.906008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.524 [2024-10-16 07:16:30.919242] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.524 [2024-10-16 07:16:30.919257] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.524 [2024-10-16 07:16:30.933745] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.524 [2024-10-16 07:16:30.933760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.524 [2024-10-16 07:16:30.946418] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.524 [2024-10-16 07:16:30.946433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.524 [2024-10-16 07:16:30.959209] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.524 [2024-10-16 07:16:30.959224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.524 [2024-10-16 07:16:30.973914] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.524 [2024-10-16 07:16:30.973930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.524 [2024-10-16 07:16:30.986855] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.524 [2024-10-16 07:16:30.986870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.524 [2024-10-16 07:16:30.998999] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.524 [2024-10-16 07:16:30.999014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.524 [2024-10-16 07:16:31.010683] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.524 [2024-10-16 07:16:31.010698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.785 [2024-10-16 07:16:31.023408] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.785 [2024-10-16 07:16:31.023424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.785 [2024-10-16 07:16:31.038301] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.785 [2024-10-16 07:16:31.038317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.785 [2024-10-16 07:16:31.051152] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.785 [2024-10-16 07:16:31.051167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.785 [2024-10-16 07:16:31.066418] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.785 [2024-10-16 07:16:31.066433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.785 [2024-10-16 07:16:31.079165] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.785 [2024-10-16 07:16:31.079179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.785 [2024-10-16 07:16:31.094221] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.785 [2024-10-16 07:16:31.094236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.785 [2024-10-16 07:16:31.106935] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.785 [2024-10-16 07:16:31.106951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.785 [2024-10-16 07:16:31.119439] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.785 [2024-10-16 07:16:31.119454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.785 [2024-10-16 07:16:31.134222] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.785 [2024-10-16 07:16:31.134238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.785 [2024-10-16 07:16:31.147329] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.785 [2024-10-16 07:16:31.147344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.785 [2024-10-16 07:16:31.162122] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.785 [2024-10-16 07:16:31.162137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.785 [2024-10-16 07:16:31.175222] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.785 [2024-10-16 07:16:31.175237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.785 [2024-10-16 07:16:31.190376] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.785 [2024-10-16 07:16:31.190391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.785 [2024-10-16 07:16:31.202861] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.785 [2024-10-16 07:16:31.202876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.785 [2024-10-16 07:16:31.215552] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.785 [2024-10-16 07:16:31.215567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.785 [2024-10-16 07:16:31.230262] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.785 [2024-10-16 07:16:31.230278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.785 [2024-10-16 07:16:31.242638] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.785 [2024-10-16 07:16:31.242653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.785 [2024-10-16 07:16:31.255224] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.785 [2024-10-16 07:16:31.255238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.785 [2024-10-16 07:16:31.270171] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.785 [2024-10-16 07:16:31.270186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.785 [2024-10-16 07:16:31.282782] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.785 [2024-10-16 07:16:31.282798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.046 [2024-10-16 07:16:31.294779] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.046 [2024-10-16 07:16:31.294795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.046 [2024-10-16 07:16:31.307742] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.046 [2024-10-16 07:16:31.307756] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.046 [2024-10-16 07:16:31.322516] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.046 [2024-10-16 07:16:31.322531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.046 [2024-10-16 07:16:31.335356] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.046 [2024-10-16 07:16:31.335371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.046 18873.00 IOPS, 147.45 MiB/s [2024-10-16T05:16:31.545Z] [2024-10-16 07:16:31.350235] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.046 [2024-10-16 07:16:31.350250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.046 [2024-10-16 07:16:31.363678] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.046 [2024-10-16 07:16:31.363692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.046 [2024-10-16 07:16:31.378653] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.046 [2024-10-16 07:16:31.378668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.046 [2024-10-16 07:16:31.391279] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.046 [2024-10-16 07:16:31.391293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.046 [2024-10-16 07:16:31.406673] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.046 [2024-10-16 07:16:31.406689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.046 [2024-10-16 07:16:31.419340] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.046 [2024-10-16 07:16:31.419355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.046 [2024-10-16 07:16:31.434272] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.046 [2024-10-16 07:16:31.434286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.046 [2024-10-16 07:16:31.446999] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.046 [2024-10-16 07:16:31.447013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.046 [2024-10-16 07:16:31.459038] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.046 [2024-10-16 07:16:31.459053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.046 [2024-10-16 07:16:31.471628] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.047 [2024-10-16 07:16:31.471643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.047 [2024-10-16 07:16:31.485821] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.047 [2024-10-16 07:16:31.485837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.047 [2024-10-16 07:16:31.498704] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.047 [2024-10-16 07:16:31.498719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.047 [2024-10-16 
07:16:31.511602] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:32.047 [2024-10-16 07:16:31.511616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:32.047 [... this add_ns/ns_paused error pair repeats every 12-15 ms (07:16:31.526562 through 07:16:32.335166) while bdevperf I/O continues ...]
00:33:33.092 18888.50 IOPS, 147.57 MiB/s [2024-10-16T05:16:32.591Z]
00:33:33.092 [... error pair repeats (07:16:32.349639 through 07:16:33.334585) ...]
00:33:33.876 18908.67 IOPS, 147.72 MiB/s [2024-10-16T05:16:33.375Z]
00:33:33.876 [... error pair repeats (07:16:33.346915 through 07:16:34.343042) ...]
00:33:34.918 18902.25 IOPS, 147.67 MiB/s [2024-10-16T05:16:34.417Z]
00:33:35.177 [2024-10-16 07:16:34.439177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.177 [2024-10-16 07:16:34.451393] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.177 [2024-10-16 07:16:34.451408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.177 [2024-10-16 07:16:34.465943] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.177 [2024-10-16 07:16:34.465958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.177 [2024-10-16 07:16:34.478677] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.177 [2024-10-16 07:16:34.478692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.177 [2024-10-16 07:16:34.491136] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.177 [2024-10-16 07:16:34.491150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.177 [2024-10-16 07:16:34.505961] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.177 [2024-10-16 07:16:34.505976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.177 [2024-10-16 07:16:34.518770] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.177 [2024-10-16 07:16:34.518786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.177 [2024-10-16 07:16:34.531314] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.177 [2024-10-16 07:16:34.531329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.177 [2024-10-16 07:16:34.546175] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.177 [2024-10-16 07:16:34.546190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.177 [2024-10-16 07:16:34.558961] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.177 [2024-10-16 07:16:34.558975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.177 [2024-10-16 07:16:34.573415] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.177 [2024-10-16 07:16:34.573431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.177 [2024-10-16 07:16:34.586351] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.177 [2024-10-16 07:16:34.586366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.177 [2024-10-16 07:16:34.599140] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.177 [2024-10-16 07:16:34.599154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.177 [2024-10-16 07:16:34.614416] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.177 [2024-10-16 07:16:34.614431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.177 [2024-10-16 07:16:34.627122] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.177 [2024-10-16 07:16:34.627136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.177 [2024-10-16 07:16:34.642139] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.177 [2024-10-16 07:16:34.642158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.177 [2024-10-16 07:16:34.654981] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.177 [2024-10-16 07:16:34.654995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.177 [2024-10-16 07:16:34.670009] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.177 [2024-10-16 07:16:34.670023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.437 [2024-10-16 07:16:34.682859] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.437 [2024-10-16 07:16:34.682875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.437 [2024-10-16 07:16:34.694990] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.437 [2024-10-16 07:16:34.695005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.437 [2024-10-16 07:16:34.707635] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.437 [2024-10-16 07:16:34.707649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.437 [2024-10-16 07:16:34.721765] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.437 [2024-10-16 07:16:34.721780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.437 [2024-10-16 07:16:34.734364] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.437 [2024-10-16 07:16:34.734378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.437 [2024-10-16 07:16:34.747142] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.437 [2024-10-16 07:16:34.747156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.437 [2024-10-16 07:16:34.761768] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.437 [2024-10-16 07:16:34.761783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.437 [2024-10-16 07:16:34.774795] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.437 [2024-10-16 07:16:34.774809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.437 [2024-10-16 07:16:34.787430] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.437 [2024-10-16 07:16:34.787445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.437 [2024-10-16 07:16:34.802165] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.437 [2024-10-16 07:16:34.802180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.437 [2024-10-16 07:16:34.814839] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.437 [2024-10-16 07:16:34.814858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.437 [2024-10-16 07:16:34.827227] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.437 [2024-10-16 07:16:34.827241] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.437
[2024-10-16 07:16:34.842329] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.437
[2024-10-16 07:16:34.842343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.437
[identical "Requested NSID 1 already in use" / "Unable to add namespace" pairs from 07:16:34.854830 through 07:16:35.338252, roughly one pair every 13 ms, omitted] 00:33:35.958
18909.60 IOPS, 147.73 MiB/s [2024-10-16T05:16:35.457Z]
[2024-10-16 07:16:35.350318] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.958
[2024-10-16 07:16:35.350334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.958
00:33:35.958 Latency(us) 00:33:35.958
[2024-10-16T05:16:35.457Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:35.958
Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:33:35.958
Nvme1n1 : 5.01 18910.64 147.74 0.00 0.00 6762.46 2839.89 12069.55 00:33:35.958
[2024-10-16T05:16:35.457Z] =================================================================================================================== 00:33:35.958
[2024-10-16T05:16:35.457Z] Total : 18910.64 147.74 0.00 0.00 6762.46 2839.89 12069.55 00:33:35.958
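The error pairs around this summary are the step's expected-failure path: while the abort workload keeps I/O in flight, the zcopy script repeatedly retries nvmf_subsystem_add_ns with an NSID the subsystem has already claimed, and every attempt must fail. A minimal sketch of the same failure outside the harness, assuming a running target that already has namespace 1 on cnode1 and a second bdev named Malloc1 (Malloc1 is hypothetical here):

  # NSID 1 is already claimed on nqn.2016-06.io.spdk:cnode1, so this add
  # fails with "Requested NSID 1 already in use" / "Unable to add namespace"
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1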
[2024-10-16 07:16:35.358840] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.958
[2024-10-16 07:16:35.358862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.958
[identical pairs from 07:16:35.370851 through 07:16:35.454848, roughly one every 12 ms, omitted] 00:33:36.218
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3382541) - No such process 00:33:36.218
07:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3382541 00:33:36.218
07:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:36.218
07:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.218
07:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:36.218
07:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.218
07:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:33:36.218
07:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.218
07:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:36.218
delay0 00:33:36.218
07:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.218
07:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:33:36.218
07:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.218
07:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 --
# set +x 00:33:36.219 07:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.219 07:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:33:36.219 [2024-10-16 07:16:35.656013] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:33:44.361 Initializing NVMe Controllers 00:33:44.361 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:44.361 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:44.361 Initialization complete. Launching workers. 00:33:44.361 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 291, failed: 15259 00:33:44.361 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 15483, failed to submit 67 00:33:44.361 success 15334, unsuccessful 149, failed 0 00:33:44.361 07:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:33:44.361 07:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:33:44.361 07:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:44.361 07:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:33:44.361 07:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:44.361 07:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:33:44.361 07:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:44.361 07:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:44.361 rmmod nvme_tcp 00:33:44.361 rmmod nvme_fabrics 00:33:44.361 rmmod nvme_keyring 00:33:44.361 07:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:44.361 07:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:33:44.361 07:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:33:44.361 07:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 3380494 ']' 00:33:44.361 07:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 3380494 00:33:44.361 07:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 3380494 ']' 00:33:44.361 07:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 3380494 00:33:44.361 07:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:33:44.361 07:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:44.361 07:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3380494 00:33:44.361 07:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # 
process_name=reactor_1 00:33:44.361 07:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:44.361 07:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3380494' 00:33:44.361 killing process with pid 3380494 00:33:44.361 07:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 3380494 00:33:44.361 07:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 3380494 00:33:44.361 07:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:44.361 07:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:44.361 07:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:44.361 07:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:33:44.361 07:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save 00:33:44.361 07:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:44.361 07:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore 00:33:44.361 07:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:44.361 07:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:44.361 07:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:44.361 07:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:44.361 07:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:45.747 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:45.747 00:33:45.747 real 0m34.384s 00:33:45.747 user 0m43.267s 00:33:45.747 sys 0m13.138s 00:33:45.747 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:45.747 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:45.747 ************************************ 00:33:45.747 END TEST nvmf_zcopy 00:33:45.747 ************************************ 00:33:45.747 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:33:45.747 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:33:45.747 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:45.747 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:45.747 ************************************ 00:33:45.747 START TEST nvmf_nmic 00:33:45.747 ************************************ 00:33:45.747 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:33:46.008 * Looking for test storage... 00:33:46.008 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:46.008 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:46.008 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:33:46.008 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:46.008 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:46.008 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:46.008 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:46.008 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:46.008 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:33:46.008 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:33:46.008 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:33:46.008 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:33:46.008 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:33:46.008 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:33:46.008 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:33:46.008 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:46.008 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:33:46.008 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:33:46.008 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:46.008 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:46.008 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:33:46.008 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:33:46.008 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:46.008 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:33:46.008 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:33:46.008 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:33:46.008 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:33:46.008 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:46.008 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:33:46.008 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:33:46.008 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:46.008 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:46.008 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:33:46.008 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:46.008 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:46.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:46.008 --rc genhtml_branch_coverage=1 00:33:46.008 --rc genhtml_function_coverage=1 00:33:46.008 --rc genhtml_legend=1 00:33:46.008 --rc geninfo_all_blocks=1 00:33:46.008 --rc geninfo_unexecuted_blocks=1 00:33:46.008 00:33:46.008 ' 00:33:46.008 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:46.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:46.008 --rc genhtml_branch_coverage=1 00:33:46.008 --rc genhtml_function_coverage=1 00:33:46.008 --rc genhtml_legend=1 00:33:46.008 --rc geninfo_all_blocks=1 00:33:46.008 --rc geninfo_unexecuted_blocks=1 00:33:46.008 00:33:46.008 ' 00:33:46.008 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:46.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:46.008 --rc genhtml_branch_coverage=1 00:33:46.008 --rc genhtml_function_coverage=1 00:33:46.008 --rc genhtml_legend=1 00:33:46.008 --rc geninfo_all_blocks=1 00:33:46.008 --rc geninfo_unexecuted_blocks=1 00:33:46.008 00:33:46.008 ' 00:33:46.008 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:46.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:46.008 --rc genhtml_branch_coverage=1 00:33:46.008 --rc genhtml_function_coverage=1 00:33:46.008 --rc genhtml_legend=1 00:33:46.008 --rc geninfo_all_blocks=1 00:33:46.008 --rc geninfo_unexecuted_blocks=1 00:33:46.008 00:33:46.008 ' 00:33:46.008 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:46.008 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:33:46.008 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:46.008 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:46.008 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:46.008 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:46.008 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:46.008 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:46.008 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:46.008 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:46.008 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:46.008 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:46.008 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:46.008 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:46.008 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:46.008 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:46.008 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:46.008 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:46.008 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:46.008 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:33:46.008 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:46.008 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:46.008 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:46.008 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same golangci/protoc/go triple repeated]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.008
07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[same toolchain and system directories as above] 00:33:46.008
07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[same toolchain and system directories as above] 00:33:46.008
07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:33:46.008
07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[same toolchain and system directories as above] 00:33:46.008
07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:33:46.008
07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:46.008
07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:46.009
07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:46.009
07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:46.009
07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:46.009
07:16:45
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:46.009 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:46.009 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:46.009 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:46.009 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:46.009 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:46.009 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:46.009 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:33:46.009 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:46.009 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:46.009 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:46.009 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:46.009 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:46.009 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:46.009 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:46.009 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:46.009 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:46.009 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:46.009 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:33:46.009 07:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:54.169 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:54.169 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:33:54.169 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:54.169 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:54.169 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:54.169 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:54.169 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:54.169 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:33:54.169 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:54.169 07:16:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:33:54.169 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:33:54.169 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:33:54.169 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:33:54.169 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:33:54.169 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:33:54.169 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:54.169 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:54.169 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:54.169 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:54.169 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:54.169 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:54.169 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:54.169 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:54.169 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:54.169 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:54.169 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:54.169 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:54.169 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:54.169 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:54.169 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:54.169 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:54.169 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:54.169 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:54.169 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:54.169 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:54.169 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:54.169 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:54.169 07:16:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:54.169 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:54.169 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:54.169 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:54.169 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:54.169 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:54.169 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:54.169 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:54.169 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:54.169 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:54.169 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:54.169 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:54.169 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:54.169 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:54.169 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:54.169 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:54.169 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:54.169 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:54.169 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:54.169 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:54.169 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:54.169 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:54.169 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:54.169 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:54.169 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:54.169 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:54.170 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:54.170 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:54.170 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:54.170 
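This PCI walk matches the host's two E810 ports (vendor 0x8086, device 0x159b) and resolves each to its kernel net device through sysfs — cvl_0_0 here, cvl_0_1 just below. A rough by-hand equivalent, assuming lspci is installed and using the PCI addresses from this run:

  # list E810-family NICs by vendor:device ID
  lspci -d 8086:159b
  # map a PCI address to its net device; on this CI image 0000:4b:00.0 -> cvl_0_0
  ls /sys/bus/pci/devices/0000:4b:00.0/net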
07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:54.170 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:54.170 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:54.170 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:54.170 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:54.170 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:54.170 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:54.170 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:33:54.170 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:54.170 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:54.170 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:54.170 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:54.170 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:54.170 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:54.170 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:54.170 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:54.170 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:54.170 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:54.170 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:54.170 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:54.170 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:54.170 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:54.170 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:54.170 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:54.170 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:54.170 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:54.170 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:54.170 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
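Condensed, the topology nvmf_tcp_init builds here and in the lines just below is a point-to-point link: the target port is moved into a private network namespace and addressed as 10.0.0.2, while the initiator port stays in the root namespace as 10.0.0.1 (interface names are specific to this run):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up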
00:33:54.170 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:54.170 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:54.170 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:54.170 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:54.170 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:54.170 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:54.170 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:54.170 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.586 ms 00:33:54.170 00:33:54.170 --- 10.0.0.2 ping statistics --- 00:33:54.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:54.170 rtt min/avg/max/mdev = 0.586/0.586/0.586/0.000 ms 00:33:54.170 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:54.170 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:54.170 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:33:54.170 00:33:54.170 --- 10.0.0.1 ping statistics --- 00:33:54.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:54.170 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:33:54.170 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:54.170 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:33:54.170 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:54.170 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:54.170 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:54.170 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:54.170 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:54.170 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:54.170 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:54.170 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:33:54.170 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:33:54.170 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:54.170 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:54.170 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=3389205 00:33:54.170 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@508 -- # waitforlisten 3389205 00:33:54.170 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:33:54.170 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 3389205 ']' 00:33:54.170 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:54.170 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:54.170 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:54.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:54.170 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:54.170 07:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:54.170 [2024-10-16 07:16:52.955121] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:54.170 [2024-10-16 07:16:52.956240] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:33:54.170 [2024-10-16 07:16:52.956291] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:54.170 [2024-10-16 07:16:53.023464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:54.170 [2024-10-16 07:16:53.072268] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:54.170 [2024-10-16 07:16:53.072323] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:54.170 [2024-10-16 07:16:53.072331] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:54.170 [2024-10-16 07:16:53.072337] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:54.170 [2024-10-16 07:16:53.072341] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:54.170 [2024-10-16 07:16:53.074430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:54.170 [2024-10-16 07:16:53.074559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:54.170 [2024-10-16 07:16:53.074721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:54.170 [2024-10-16 07:16:53.074722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:54.170 [2024-10-16 07:16:53.146221] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:54.170 [2024-10-16 07:16:53.146677] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:54.170 [2024-10-16 07:16:53.147506] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
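nvmfappstart has launched the target inside that namespace with all trace groups enabled, and because the suite passes --interrupt-mode, every reactor and spdk_thread comes up in interrupt rather than poll mode (the thread.c notices around here). Stripped of the harness, the launch is roughly the following; the rpc_get_methods poll is only a stand-in for the script's waitforlisten and assumes rpc.py's -t client-timeout option:

  ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
  # block until the app answers RPCs on the default /var/tmp/spdk.sock
  scripts/rpc.py -t 30 rpc_get_methods > /dev/null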
00:33:54.170 [2024-10-16 07:16:53.147825] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:54.170 [2024-10-16 07:16:53.147994] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:54.170 07:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:54.170 07:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:33:54.170 07:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:54.170 07:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:54.170 07:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:54.170 07:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:54.170 07:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:54.170 07:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.170 07:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:54.170 [2024-10-16 07:16:53.235208] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:54.170 07:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.170 07:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:54.170 07:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.170 07:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:54.170 Malloc0 00:33:54.170 07:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.170 07:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:54.170 07:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.170 07:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:54.170 07:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.171 07:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:54.171 07:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.171 07:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:54.171 07:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.171 07:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
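With the app up, nmic.sh assembles its first target: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem cnode1 with one namespace, and a listener on 10.0.0.2:4420. The same sequence as plain rpc.py calls (rpc_cmd in these scripts is effectively a wrapper around scripts/rpc.py):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420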
00:33:54.171 07:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.171 07:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:54.171 [2024-10-16 07:16:53.323953] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:54.171 07:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.171 07:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:33:54.171 test case1: single bdev can't be used in multiple subsystems 00:33:54.171 07:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:33:54.171 07:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.171 07:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:54.171 07:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.171 07:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:54.171 07:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.171 07:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:54.171 07:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.171 07:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:33:54.171 07:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:33:54.171 07:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.171 07:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:54.171 [2024-10-16 07:16:53.351165] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:33:54.171 [2024-10-16 07:16:53.351193] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:33:54.171 [2024-10-16 07:16:53.351202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:54.171 request: 00:33:54.171 { 00:33:54.171 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:33:54.171 "namespace": { 00:33:54.171 "bdev_name": "Malloc0", 00:33:54.171 "no_auto_visible": false 00:33:54.171 }, 00:33:54.171 "method": "nvmf_subsystem_add_ns", 00:33:54.171 "req_id": 1 00:33:54.171 } 00:33:54.171 Got JSON-RPC error response 00:33:54.171 response: 00:33:54.171 { 00:33:54.171 "code": -32602, 00:33:54.171 "message": "Invalid parameters" 00:33:54.171 } 00:33:54.171 07:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:33:54.171 07:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:33:54.171 07:16:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:33:54.171 07:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:33:54.171 Adding namespace failed - expected result. 00:33:54.171 07:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:33:54.171 test case2: host connect to nvmf target in multiple paths 00:33:54.171 07:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:54.171 07:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.171 07:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:54.171 [2024-10-16 07:16:53.363330] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:54.171 07:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.171 07:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:33:54.433 07:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:33:55.008 07:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:33:55.008 07:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:33:55.008 07:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:33:55.008 07:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:33:55.008 07:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:33:56.923 07:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:33:56.923 07:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:33:56.923 07:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:33:56.923 07:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:33:56.923 07:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:33:56.923 07:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:33:56.923 07:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:33:56.923 [global] 00:33:56.923 thread=1 00:33:56.923 invalidate=1 
00:33:56.923 rw=write 00:33:56.924 time_based=1 00:33:56.924 runtime=1 00:33:56.924 ioengine=libaio 00:33:56.924 direct=1 00:33:56.924 bs=4096 00:33:56.924 iodepth=1 00:33:56.924 norandommap=0 00:33:56.924 numjobs=1 00:33:56.924 00:33:56.924 verify_dump=1 00:33:56.924 verify_backlog=512 00:33:56.924 verify_state_save=0 00:33:56.924 do_verify=1 00:33:56.924 verify=crc32c-intel 00:33:56.924 [job0] 00:33:56.924 filename=/dev/nvme0n1 00:33:56.924 Could not set queue depth (nvme0n1) 00:33:57.184 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:57.184 fio-3.35 00:33:57.184 Starting 1 thread 00:33:58.571 00:33:58.571 job0: (groupid=0, jobs=1): err= 0: pid=3390071: Wed Oct 16 07:16:57 2024 00:33:58.571 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:33:58.571 slat (nsec): min=7153, max=61810, avg=27430.58, stdev=3276.31 00:33:58.571 clat (usec): min=497, max=1186, avg=956.80, stdev=79.44 00:33:58.571 lat (usec): min=512, max=1213, avg=984.23, stdev=80.11 00:33:58.571 clat percentiles (usec): 00:33:58.571 | 1.00th=[ 668], 5.00th=[ 807], 10.00th=[ 865], 20.00th=[ 922], 00:33:58.571 | 30.00th=[ 947], 40.00th=[ 963], 50.00th=[ 971], 60.00th=[ 979], 00:33:58.571 | 70.00th=[ 996], 80.00th=[ 1012], 90.00th=[ 1029], 95.00th=[ 1057], 00:33:58.571 | 99.00th=[ 1090], 99.50th=[ 1156], 99.90th=[ 1188], 99.95th=[ 1188], 00:33:58.571 | 99.99th=[ 1188] 00:33:58.571 write: IOPS=823, BW=3293KiB/s (3372kB/s)(3296KiB/1001msec); 0 zone resets 00:33:58.571 slat (nsec): min=9221, max=78634, avg=30578.20, stdev=11976.21 00:33:58.571 clat (usec): min=223, max=1732, avg=558.29, stdev=114.83 00:33:58.571 lat (usec): min=233, max=1769, avg=588.87, stdev=119.66 00:33:58.571 clat percentiles (usec): 00:33:58.571 | 1.00th=[ 322], 5.00th=[ 392], 10.00th=[ 429], 20.00th=[ 469], 00:33:58.571 | 30.00th=[ 502], 40.00th=[ 537], 50.00th=[ 570], 60.00th=[ 586], 00:33:58.571 | 70.00th=[ 611], 80.00th=[ 644], 90.00th=[ 676], 95.00th=[ 709], 00:33:58.571 | 99.00th=[ 775], 99.50th=[ 824], 99.90th=[ 1729], 99.95th=[ 1729], 00:33:58.571 | 99.99th=[ 1729] 00:33:58.571 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:33:58.571 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:58.571 lat (usec) : 250=0.15%, 500=17.96%, 750=43.34%, 1000=28.82% 00:33:58.571 lat (msec) : 2=9.73% 00:33:58.571 cpu : usr=2.40%, sys=5.50%, ctx=1340, majf=0, minf=1 00:33:58.571 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:58.571 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.571 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.571 issued rwts: total=512,824,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:58.571 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:58.571 00:33:58.571 Run status group 0 (all jobs): 00:33:58.571 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:33:58.571 WRITE: bw=3293KiB/s (3372kB/s), 3293KiB/s-3293KiB/s (3372kB/s-3372kB/s), io=3296KiB (3375kB), run=1001-1001msec 00:33:58.571 00:33:58.571 Disk stats (read/write): 00:33:58.571 nvme0n1: ios=554/652, merge=0/0, ticks=829/292, in_queue=1121, util=97.09% 00:33:58.571 07:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:58.571 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:33:58.571 07:16:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:58.571 07:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:33:58.571 07:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:33:58.571 07:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:58.571 07:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:33:58.571 07:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:58.571 07:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:33:58.571 07:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:33:58.571 07:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:33:58.571 07:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:58.571 07:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:33:58.571 07:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:58.571 07:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:33:58.571 07:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:58.571 07:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:58.571 rmmod nvme_tcp 00:33:58.571 rmmod nvme_fabrics 00:33:58.571 rmmod nvme_keyring 00:33:58.835 07:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:58.835 07:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:33:58.835 07:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:33:58.835 07:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 3389205 ']' 00:33:58.835 07:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 3389205 00:33:58.835 07:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 3389205 ']' 00:33:58.835 07:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 3389205 00:33:58.835 07:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:33:58.835 07:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:58.835 07:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3389205 00:33:58.835 07:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:58.835 07:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:58.835 07:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@968 -- # 
echo 'killing process with pid 3389205' 00:33:58.835 killing process with pid 3389205 00:33:58.835 07:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 3389205 00:33:58.835 07:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 3389205 00:33:58.835 07:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:58.835 07:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:58.836 07:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:58.836 07:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:33:58.836 07:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:33:58.836 07:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:58.836 07:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:33:58.836 07:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:58.836 07:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:58.836 07:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:58.836 07:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:58.836 07:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:01.384 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:01.384 00:34:01.384 real 0m15.181s 00:34:01.384 user 0m36.096s 00:34:01.384 sys 0m7.363s 00:34:01.384 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:01.384 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:01.384 ************************************ 00:34:01.384 END TEST nvmf_nmic 00:34:01.384 ************************************ 00:34:01.384 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:34:01.384 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:34:01.384 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:01.384 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:01.384 ************************************ 00:34:01.384 START TEST nvmf_fio_target 00:34:01.384 ************************************ 00:34:01.384 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:34:01.384 * Looking for test storage... 
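The nmic test that just completed exercises two target behaviors. First, a bdev already claimed by one subsystem cannot be added to a second one: the JSON-RPC error above (code -32602, "Invalid parameters") is the expected outcome, since Malloc0 is held with an exclusive_write claim by cnode1. Second, a single subsystem can expose listeners on multiple ports, so one host can connect to it over several paths at once. A condensed sketch of that flow, with subsystem names, address, and ports taken from the trace above (rpc.py path abbreviated, and the --hostnqn/--hostid flags the harness passes to nvme connect omitted for brevity):

# test case1: sharing an already-claimed bdev must fail
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 \
  || echo ' Adding namespace failed - expected result.'

# test case2: one subsystem, two listeners, one host connected over both paths
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # drops both paths, matching the "disconnected 2 controller(s)" line above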
00:34:01.384 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:01.384 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:01.384 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:34:01.384 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:01.384 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:01.384 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:01.384 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:01.384 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:01.384 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:34:01.384 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:34:01.384 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:34:01.384 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:34:01.384 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:34:01.384 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:34:01.384 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:34:01.384 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:01.384 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:34:01.384 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:34:01.384 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:01.384 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:01.384 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:34:01.384 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:34:01.384 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:01.384 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:34:01.384 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:34:01.384 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:34:01.384 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:34:01.384 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:01.384 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:34:01.384 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:34:01.384 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:01.384 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:01.384 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:34:01.384 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:01.384 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:01.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:01.384 --rc genhtml_branch_coverage=1 00:34:01.384 --rc genhtml_function_coverage=1 00:34:01.384 --rc genhtml_legend=1 00:34:01.384 --rc geninfo_all_blocks=1 00:34:01.384 --rc geninfo_unexecuted_blocks=1 00:34:01.384 00:34:01.384 ' 00:34:01.384 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:01.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:01.384 --rc genhtml_branch_coverage=1 00:34:01.384 --rc genhtml_function_coverage=1 00:34:01.384 --rc genhtml_legend=1 00:34:01.384 --rc geninfo_all_blocks=1 00:34:01.384 --rc geninfo_unexecuted_blocks=1 00:34:01.384 00:34:01.384 ' 00:34:01.384 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:01.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:01.384 --rc genhtml_branch_coverage=1 00:34:01.384 --rc genhtml_function_coverage=1 00:34:01.384 --rc genhtml_legend=1 00:34:01.384 --rc geninfo_all_blocks=1 00:34:01.384 --rc geninfo_unexecuted_blocks=1 00:34:01.384 00:34:01.384 ' 00:34:01.384 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:01.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:01.384 --rc genhtml_branch_coverage=1 00:34:01.385 --rc genhtml_function_coverage=1 00:34:01.385 --rc genhtml_legend=1 00:34:01.385 --rc geninfo_all_blocks=1 00:34:01.385 --rc geninfo_unexecuted_blocks=1 00:34:01.385 
00:34:01.385 ' 00:34:01.385 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:01.385 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:34:01.385 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:01.385 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:01.385 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:01.385 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:01.385 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:01.385 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:01.385 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:01.385 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:01.385 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:01.385 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:01.385 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:01.385 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:01.385 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:01.385 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:01.385 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:01.385 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:01.385 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:01.385 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:34:01.385 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:01.385 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:01.385 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:01.385 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:01.385 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:01.385 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:01.385 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:34:01.385 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:01.385 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:34:01.385 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:01.385 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:01.385 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:01.385 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:01.385 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:34:01.385 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:01.385 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:01.385 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:01.385 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:01.385 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:01.385 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:01.385 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:01.385 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:01.385 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:34:01.385 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:34:01.385 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:01.385 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:34:01.385 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:34:01.385 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:34:01.385 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:01.385 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:01.385 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:01.385 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:34:01.385 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:34:01.385 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:34:01.385 07:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:09.529 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:09.530 07:17:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:09.530 07:17:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:09.530 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:09.530 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:09.530 Found net 
devices under 0000:4b:00.0: cvl_0_0 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:09.530 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:09.530 07:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:09.530 07:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:09.530 07:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:09.531 07:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:09.531 07:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:09.531 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:09.531 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.687 ms 00:34:09.531 00:34:09.531 --- 10.0.0.2 ping statistics --- 00:34:09.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:09.531 rtt min/avg/max/mdev = 0.687/0.687/0.687/0.000 ms 00:34:09.531 07:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:09.531 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:09.531 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:34:09.531 00:34:09.531 --- 10.0.0.1 ping statistics --- 00:34:09.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:09.531 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:34:09.531 07:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:09.531 07:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:34:09.531 07:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:34:09.531 07:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:09.531 07:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:34:09.531 07:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:34:09.531 07:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:09.531 07:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:34:09.531 07:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:34:09.531 07:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:34:09.531 07:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:34:09.531 07:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:09.531 07:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:09.531 07:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=3394413 00:34:09.531 07:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 3394413 00:34:09.531 07:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:34:09.531 07:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 3394413 ']' 00:34:09.531 07:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:09.531 07:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:09.531 07:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:09.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
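The two clean pings close out nvmftestinit's physical-NIC setup: one port of the E810 pair (cvl_0_0) is moved into a private network namespace and becomes the target interface at 10.0.0.2, while its link partner (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, with an iptables rule opening the NVMe/TCP port between them. The essential steps, condensed from the setup trace above:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target NIC into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                     # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target namespace -> initiator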
00:34:09.531 07:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:09.531 07:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:09.531 [2024-10-16 07:17:08.206716] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:09.531 [2024-10-16 07:17:08.207840] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:34:09.531 [2024-10-16 07:17:08.207915] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:09.531 [2024-10-16 07:17:08.295234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:09.531 [2024-10-16 07:17:08.348361] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:09.531 [2024-10-16 07:17:08.348414] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:09.531 [2024-10-16 07:17:08.348422] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:09.531 [2024-10-16 07:17:08.348429] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:09.531 [2024-10-16 07:17:08.348435] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:09.531 [2024-10-16 07:17:08.350484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:09.531 [2024-10-16 07:17:08.350649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:09.531 [2024-10-16 07:17:08.350809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:09.531 [2024-10-16 07:17:08.350810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:09.531 [2024-10-16 07:17:08.427795] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:09.531 [2024-10-16 07:17:08.428059] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:09.531 [2024-10-16 07:17:08.428874] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:09.531 [2024-10-16 07:17:08.429603] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:09.531 [2024-10-16 07:17:08.429619] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
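The notices above are the visible effect of starting the target in interrupt mode, the property this nvmf_target_core_interrupt_mode suite exercises: spdk_interrupt_mode_enable is set before the app starts, four reactors come up on cores 0 through 3 (core mask 0xF), and the app thread plus all four nvmf poll-group threads are placed in interrupt mode rather than continuous polling. The launch line, as issued inside the target namespace (Jenkins workspace prefix elided):

ip netns exec cvl_0_0_ns_spdk \
  build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF

Here -i 0 selects the shared-memory instance ID, -e 0xFFFF enables all tracepoint groups (hence the spdk_trace hint in the startup notices), and the harness then waits via waitforlisten until the daemon answers on /var/tmp/spdk.sock before issuing RPCs.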
00:34:09.531 07:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:09.531 07:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:34:09.531 07:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:09.531 07:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:09.531 07:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:09.793 07:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:09.793 07:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:09.793 [2024-10-16 07:17:09.223691] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:09.793 07:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:10.054 07:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:34:10.054 07:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:10.316 07:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:34:10.316 07:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:10.577 07:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:34:10.577 07:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:10.839 07:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:34:10.839 07:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:34:10.839 07:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:11.100 07:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:34:11.100 07:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:11.361 07:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:34:11.361 07:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:11.623 07:17:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:34:11.623 07:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:34:11.623 07:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:11.885 07:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:11.885 07:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:12.146 07:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:12.146 07:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:34:12.146 07:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:12.408 [2024-10-16 07:17:11.791669] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:12.408 07:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:34:12.670 07:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:34:12.931 07:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:13.192 07:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:34:13.192 07:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:34:13.192 07:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:34:13.192 07:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:34:13.192 07:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:34:13.192 07:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:34:15.244 07:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:34:15.244 07:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o 
NAME,SERIAL 00:34:15.244 07:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:34:15.244 07:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:34:15.244 07:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:34:15.244 07:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:34:15.244 07:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:15.244 [global] 00:34:15.244 thread=1 00:34:15.244 invalidate=1 00:34:15.244 rw=write 00:34:15.244 time_based=1 00:34:15.244 runtime=1 00:34:15.244 ioengine=libaio 00:34:15.244 direct=1 00:34:15.244 bs=4096 00:34:15.244 iodepth=1 00:34:15.244 norandommap=0 00:34:15.244 numjobs=1 00:34:15.244 00:34:15.244 verify_dump=1 00:34:15.244 verify_backlog=512 00:34:15.244 verify_state_save=0 00:34:15.244 do_verify=1 00:34:15.244 verify=crc32c-intel 00:34:15.244 [job0] 00:34:15.244 filename=/dev/nvme0n1 00:34:15.244 [job1] 00:34:15.244 filename=/dev/nvme0n2 00:34:15.244 [job2] 00:34:15.244 filename=/dev/nvme0n3 00:34:15.244 [job3] 00:34:15.244 filename=/dev/nvme0n4 00:34:15.515 Could not set queue depth (nvme0n1) 00:34:15.515 Could not set queue depth (nvme0n2) 00:34:15.515 Could not set queue depth (nvme0n3) 00:34:15.515 Could not set queue depth (nvme0n4) 00:34:15.776 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:15.776 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:15.776 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:15.776 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:15.776 fio-3.35 00:34:15.776 Starting 4 threads 00:34:17.191 00:34:17.191 job0: (groupid=0, jobs=1): err= 0: pid=3395994: Wed Oct 16 07:17:16 2024 00:34:17.191 read: IOPS=529, BW=2118KiB/s (2169kB/s)(2120KiB/1001msec) 00:34:17.191 slat (nsec): min=7056, max=58447, avg=24148.64, stdev=6070.06 00:34:17.191 clat (usec): min=314, max=1345, avg=895.45, stdev=170.21 00:34:17.191 lat (usec): min=340, max=1372, avg=919.60, stdev=170.90 00:34:17.191 clat percentiles (usec): 00:34:17.191 | 1.00th=[ 545], 5.00th=[ 652], 10.00th=[ 693], 20.00th=[ 750], 00:34:17.191 | 30.00th=[ 775], 40.00th=[ 807], 50.00th=[ 865], 60.00th=[ 955], 00:34:17.191 | 70.00th=[ 1029], 80.00th=[ 1074], 90.00th=[ 1123], 95.00th=[ 1156], 00:34:17.191 | 99.00th=[ 1221], 99.50th=[ 1254], 99.90th=[ 1352], 99.95th=[ 1352], 00:34:17.191 | 99.99th=[ 1352] 00:34:17.191 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:34:17.191 slat (nsec): min=9577, max=73430, avg=27823.12, stdev=9936.26 00:34:17.191 clat (usec): min=202, max=893, avg=462.82, stdev=108.48 00:34:17.191 lat (usec): min=220, max=924, avg=490.64, stdev=111.91 00:34:17.191 clat percentiles (usec): 00:34:17.191 | 1.00th=[ 253], 5.00th=[ 310], 10.00th=[ 338], 20.00th=[ 367], 00:34:17.191 | 30.00th=[ 412], 40.00th=[ 441], 50.00th=[ 457], 60.00th=[ 474], 00:34:17.191 | 70.00th=[ 494], 80.00th=[ 537], 90.00th=[ 603], 95.00th=[ 668], 00:34:17.191 | 99.00th=[ 
00:34:15.244 07:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:34:15.244 [global]
00:34:15.244 thread=1
00:34:15.244 invalidate=1
00:34:15.244 rw=write
00:34:15.244 time_based=1
00:34:15.244 runtime=1
00:34:15.244 ioengine=libaio
00:34:15.244 direct=1
00:34:15.244 bs=4096
00:34:15.244 iodepth=1
00:34:15.244 norandommap=0
00:34:15.244 numjobs=1
00:34:15.244
00:34:15.244 verify_dump=1
00:34:15.244 verify_backlog=512
00:34:15.244 verify_state_save=0
00:34:15.244 do_verify=1
00:34:15.244 verify=crc32c-intel
00:34:15.244 [job0]
00:34:15.244 filename=/dev/nvme0n1
00:34:15.244 [job1]
00:34:15.244 filename=/dev/nvme0n2
00:34:15.244 [job2]
00:34:15.244 filename=/dev/nvme0n3
00:34:15.244 [job3]
00:34:15.244 filename=/dev/nvme0n4
00:34:15.515 Could not set queue depth (nvme0n1)
00:34:15.515 Could not set queue depth (nvme0n2)
00:34:15.515 Could not set queue depth (nvme0n3)
00:34:15.515 Could not set queue depth (nvme0n4)
00:34:15.776 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:34:15.776 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:34:15.776 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:34:15.776 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:34:15.776 fio-3.35
00:34:15.776 Starting 4 threads
00:34:17.191
00:34:17.191 job0: (groupid=0, jobs=1): err= 0: pid=3395994: Wed Oct 16 07:17:16 2024
00:34:17.191 read: IOPS=529, BW=2118KiB/s (2169kB/s)(2120KiB/1001msec)
00:34:17.191 slat (nsec): min=7056, max=58447, avg=24148.64, stdev=6070.06
00:34:17.191 clat (usec): min=314, max=1345, avg=895.45, stdev=170.21
00:34:17.191 lat (usec): min=340, max=1372, avg=919.60, stdev=170.90
00:34:17.191 clat percentiles (usec):
00:34:17.191 | 1.00th=[ 545], 5.00th=[ 652], 10.00th=[ 693], 20.00th=[ 750],
00:34:17.191 | 30.00th=[ 775], 40.00th=[ 807], 50.00th=[ 865], 60.00th=[ 955],
00:34:17.191 | 70.00th=[ 1029], 80.00th=[ 1074], 90.00th=[ 1123], 95.00th=[ 1156],
00:34:17.191 | 99.00th=[ 1221], 99.50th=[ 1254], 99.90th=[ 1352], 99.95th=[ 1352],
00:34:17.191 | 99.99th=[ 1352]
00:34:17.191 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets
00:34:17.191 slat (nsec): min=9577, max=73430, avg=27823.12, stdev=9936.26
00:34:17.191 clat (usec): min=202, max=893, avg=462.82, stdev=108.48
00:34:17.191 lat (usec): min=220, max=924, avg=490.64, stdev=111.91
00:34:17.191 clat percentiles (usec):
00:34:17.191 | 1.00th=[ 253], 5.00th=[ 310], 10.00th=[ 338], 20.00th=[ 367],
00:34:17.191 | 30.00th=[ 412], 40.00th=[ 441], 50.00th=[ 457], 60.00th=[ 474],
00:34:17.191 | 70.00th=[ 494], 80.00th=[ 537], 90.00th=[ 603], 95.00th=[ 668],
00:34:17.191 | 99.00th=[ 807], 99.50th=[ 865], 99.90th=[ 881], 99.95th=[ 898],
00:34:17.191 | 99.99th=[ 898]
00:34:17.191 bw ( KiB/s): min= 4096, max= 4096, per=34.03%, avg=4096.00, stdev= 0.00, samples=1
00:34:17.191 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:34:17.191 lat (usec) : 250=0.58%, 500=47.10%, 750=24.32%, 1000=15.96%
00:34:17.191 lat (msec) : 2=12.03%
00:34:17.191 cpu : usr=2.40%, sys=4.10%, ctx=1555, majf=0, minf=1
00:34:17.191 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:17.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:17.191 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:17.191 issued rwts: total=530,1024,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:17.191 latency : target=0, window=0, percentile=100.00%, depth=1
00:34:17.191 job1: (groupid=0, jobs=1): err= 0: pid=3395995: Wed Oct 16 07:17:16 2024
00:34:17.191 read: IOPS=18, BW=74.4KiB/s (76.2kB/s)(76.0KiB/1021msec)
00:34:17.191 slat (nsec): min=26587, max=27337, avg=26988.68, stdev=196.49
00:34:17.191 clat (usec): min=40825, max=41977, avg=41024.27, stdev=242.30
00:34:17.191 lat (usec): min=40852, max=42004, avg=41051.26, stdev=242.35
00:34:17.191 clat percentiles (usec):
00:34:17.191 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157],
00:34:17.191 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:34:17.191 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206],
00:34:17.191 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:34:17.191 | 99.99th=[42206]
00:34:17.191 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets
00:34:17.191 slat (nsec): min=10076, max=63985, avg=31206.04, stdev=9615.37
00:34:17.191 clat (usec): min=150, max=719, avg=427.83, stdev=99.96
00:34:17.191 lat (usec): min=161, max=754, avg=459.03, stdev=102.75
00:34:17.191 clat percentiles (usec):
00:34:17.191 | 1.00th=[ 223], 5.00th=[ 281], 10.00th=[ 302], 20.00th=[ 338],
00:34:17.191 | 30.00th=[ 363], 40.00th=[ 392], 50.00th=[ 429], 60.00th=[ 461],
00:34:17.191 | 70.00th=[ 490], 80.00th=[ 515], 90.00th=[ 553], 95.00th=[ 603],
00:34:17.191 | 99.00th=[ 644], 99.50th=[ 701], 99.90th=[ 717], 99.95th=[ 717],
00:34:17.191 | 99.99th=[ 717]
00:34:17.191 bw ( KiB/s): min= 4096, max= 4096, per=34.03%, avg=4096.00, stdev= 0.00, samples=1
00:34:17.191 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:34:17.191 lat (usec) : 250=3.01%, 500=68.93%, 750=24.48%
00:34:17.191 lat (msec) : 50=3.58%
00:34:17.191 cpu : usr=0.20%, sys=2.06%, ctx=532, majf=0, minf=1
00:34:17.191 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:17.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:17.191 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:17.191 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:17.191 latency : target=0, window=0, percentile=100.00%, depth=1
00:34:17.191 job2: (groupid=0, jobs=1): err= 0: pid=3395998: Wed Oct 16 07:17:16 2024
00:34:17.191 read: IOPS=16, BW=67.0KiB/s (68.6kB/s)(68.0KiB/1015msec)
00:34:17.191 slat (nsec): min=27507, max=31237, avg=28579.82, stdev=753.21
00:34:17.191 clat (usec): min=1208, max=42108, avg=39538.97, stdev=9878.50
00:34:17.191 lat (usec): min=1237, max=42136, avg=39567.55, stdev=9878.47
00:34:17.191 clat percentiles (usec):
00:34:17.191 | 1.00th=[ 1205], 5.00th=[ 1205], 10.00th=[41681], 20.00th=[41681],
00:34:17.191 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206],
00:34:17.191 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206],
00:34:17.191 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:34:17.191 | 99.99th=[42206]
00:34:17.191 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets
00:34:17.191 slat (nsec): min=10486, max=68614, avg=36550.79, stdev=7265.45
00:34:17.191 clat (usec): min=240, max=1003, avg=620.37, stdev=151.77
00:34:17.191 lat (usec): min=257, max=1041, avg=656.92, stdev=153.31
00:34:17.191 clat percentiles (usec):
00:34:17.191 | 1.00th=[ 277], 5.00th=[ 343], 10.00th=[ 424], 20.00th=[ 494],
00:34:17.191 | 30.00th=[ 537], 40.00th=[ 586], 50.00th=[ 635], 60.00th=[ 668],
00:34:17.191 | 70.00th=[ 717], 80.00th=[ 758], 90.00th=[ 807], 95.00th=[ 848],
00:34:17.191 | 99.00th=[ 930], 99.50th=[ 979], 99.90th=[ 1004], 99.95th=[ 1004],
00:34:17.191 | 99.99th=[ 1004]
00:34:17.191 bw ( KiB/s): min= 4096, max= 4096, per=34.03%, avg=4096.00, stdev= 0.00, samples=1
00:34:17.191 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:34:17.191 lat (usec) : 250=0.57%, 500=19.66%, 750=55.77%, 1000=20.60%
00:34:17.191 lat (msec) : 2=0.38%, 50=3.02%
00:34:17.191 cpu : usr=0.59%, sys=2.96%, ctx=531, majf=0, minf=1
00:34:17.191 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:17.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:17.191 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:17.192 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:17.192 latency : target=0, window=0, percentile=100.00%, depth=1
00:34:17.192 job3: (groupid=0, jobs=1): err= 0: pid=3395999: Wed Oct 16 07:17:16 2024
00:34:17.192 read: IOPS=640, BW=2561KiB/s (2623kB/s)(2564KiB/1001msec)
00:34:17.192 slat (nsec): min=7096, max=60407, avg=22817.20, stdev=8101.01
00:34:17.192 clat (usec): min=546, max=970, avg=802.32, stdev=66.49
00:34:17.192 lat (usec): min=554, max=996, avg=825.13, stdev=68.55
00:34:17.192 clat percentiles (usec):
00:34:17.192 | 1.00th=[ 627], 5.00th=[ 685], 10.00th=[ 701], 20.00th=[ 750],
00:34:17.192 | 30.00th=[ 783], 40.00th=[ 799], 50.00th=[ 816], 60.00th=[ 824],
00:34:17.192 | 70.00th=[ 840], 80.00th=[ 857], 90.00th=[ 881], 95.00th=[ 898],
00:34:17.192 | 99.00th=[ 938], 99.50th=[ 947], 99.90th=[ 971], 99.95th=[ 971],
00:34:17.192 | 99.99th=[ 971]
00:34:17.192 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets
00:34:17.192 slat (nsec): min=9818, max=51810, avg=27174.99, stdev=10355.19
00:34:17.192 clat (usec): min=126, max=637, avg=422.33, stdev=67.32
00:34:17.192 lat (usec): min=138, max=670, avg=449.50, stdev=71.91
00:34:17.192 clat percentiles (usec):
00:34:17.192 | 1.00th=[ 251], 5.00th=[ 314], 10.00th=[ 330], 20.00th=[ 355],
00:34:17.192 | 30.00th=[ 388], 40.00th=[ 424], 50.00th=[ 437], 60.00th=[ 449],
00:34:17.192 | 70.00th=[ 461], 80.00th=[ 478], 90.00th=[ 494], 95.00th=[ 515],
00:34:17.192 | 99.00th=[ 553], 99.50th=[ 586], 99.90th=[ 635], 99.95th=[ 635],
00:34:17.192 | 99.99th=[ 635]
00:34:17.192 bw ( KiB/s): min= 4096, max= 4096, per=34.03%, avg=4096.00, stdev= 0.00, samples=1
00:34:17.192 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:34:17.192 lat (usec) : 250=0.60%, 500=55.50%, 750=13.15%, 1000=30.75%
00:34:17.192 cpu : usr=2.70%, sys=3.90%, ctx=1665, majf=0, minf=1
00:34:17.192 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:17.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:17.192 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:17.192 issued rwts: total=641,1024,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:17.192 latency : target=0, window=0, percentile=100.00%, depth=1
00:34:17.192
00:34:17.192 Run status group 0 (all jobs):
00:34:17.192 READ: bw=4729KiB/s (4842kB/s), 67.0KiB/s-2561KiB/s (68.6kB/s-2623kB/s), io=4828KiB (4944kB), run=1001-1021msec
00:34:17.192 WRITE: bw=11.8MiB/s (12.3MB/s), 2006KiB/s-4092KiB/s (2054kB/s-4190kB/s), io=12.0MiB (12.6MB), run=1001-1021msec
00:34:17.192
00:34:17.192 Disk stats (read/write):
00:34:17.192 nvme0n1: ios=562/731, merge=0/0, ticks=501/318, in_queue=819, util=87.47%
00:34:17.192 nvme0n2: ios=37/512, merge=0/0, ticks=1535/218, in_queue=1753, util=96.63%
00:34:17.192 nvme0n3: ios=34/512, merge=0/0, ticks=1383/247, in_queue=1630, util=96.51%
00:34:17.192 nvme0n4: ios=512/892, merge=0/0, ticks=411/358, in_queue=769, util=89.50%
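The fio-wrapper flags map directly onto the job file fio echoes back above; judging by that echoed file, -i sets bs, -d sets iodepth, -t sets rw, -r sets runtime (with time_based), and -v turns on crc32c-intel verification. A sketch of the first pass spelled out, under those inferred mappings:

    wrapper=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper
    # -i 4096 -> bs=4096, -d 1 -> iodepth=1, -t write -> rw=write,
    # -r 1 -> runtime=1, -v -> do_verify=1 with verify=crc32c-intel
    $wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v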
00:34:17.192 07:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v
00:34:17.192 [global]
00:34:17.192 thread=1
00:34:17.192 invalidate=1
00:34:17.192 rw=randwrite
00:34:17.192 time_based=1
00:34:17.192 runtime=1
00:34:17.192 ioengine=libaio
00:34:17.192 direct=1
00:34:17.192 bs=4096
00:34:17.192 iodepth=1
00:34:17.192 norandommap=0
00:34:17.192 numjobs=1
00:34:17.192
00:34:17.192 verify_dump=1
00:34:17.192 verify_backlog=512
00:34:17.192 verify_state_save=0
00:34:17.192 do_verify=1
00:34:17.192 verify=crc32c-intel
00:34:17.192 [job0]
00:34:17.192 filename=/dev/nvme0n1
00:34:17.192 [job1]
00:34:17.192 filename=/dev/nvme0n2
00:34:17.192 [job2]
00:34:17.192 filename=/dev/nvme0n3
00:34:17.192 [job3]
00:34:17.192 filename=/dev/nvme0n4
00:34:17.192 Could not set queue depth (nvme0n1)
00:34:17.192 Could not set queue depth (nvme0n2)
00:34:17.192 Could not set queue depth (nvme0n3)
00:34:17.192 Could not set queue depth (nvme0n4)
00:34:17.455 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:34:17.455 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:34:17.455 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:34:17.455 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:34:17.455 fio-3.35
00:34:17.455 Starting 4 threads
00:34:18.942
00:34:18.942 job0: (groupid=0, jobs=1): err= 0: pid=3396518: Wed Oct 16 07:17:18 2024
00:34:18.942 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec)
00:34:18.942 slat (nsec): min=24415, max=55004, avg=25542.54, stdev=2544.22
00:34:18.942 clat (usec): min=859, max=1606, avg=1200.81, stdev=116.44
00:34:18.942 lat (usec): min=885, max=1631, avg=1226.35, stdev=116.26
00:34:18.942 clat percentiles (usec):
00:34:18.942 | 1.00th=[ 922], 5.00th=[ 996], 10.00th=[ 1057], 20.00th=[ 1106],
00:34:18.942 | 30.00th=[ 1139], 40.00th=[ 1172], 50.00th=[ 1205], 60.00th=[ 1237],
00:34:18.942 | 70.00th=[ 1270], 80.00th=[ 1303], 90.00th=[ 1352], 95.00th=[ 1385],
00:34:18.942 | 99.00th=[ 1450], 99.50th=[ 1483], 99.90th=[ 1614], 99.95th=[ 1614],
00:34:18.942 | 99.99th=[ 1614]
00:34:18.942 write: IOPS=523, BW=2094KiB/s (2144kB/s)(2096KiB/1001msec); 0 zone resets
00:34:18.942 slat (nsec): min=9452, max=67029, avg=28355.65, stdev=8658.88
00:34:18.942 clat (usec): min=301, max=1052, avg=666.57, stdev=131.88
00:34:18.942 lat (usec): min=310, max=1083, avg=694.93, stdev=135.16
00:34:18.942 clat percentiles (usec):
00:34:18.942 | 1.00th=[ 347], 5.00th=[ 424], 10.00th=[ 486], 20.00th=[ 553],
00:34:18.942 | 30.00th=[ 611], 40.00th=[ 644], 50.00th=[ 676], 60.00th=[ 709],
00:34:18.942 | 70.00th=[ 734], 80.00th=[ 775], 90.00th=[ 824], 95.00th=[ 889],
00:34:18.942 | 99.00th=[ 938], 99.50th=[ 979], 99.90th=[ 1057], 99.95th=[ 1057],
00:34:18.942 | 99.99th=[ 1057]
00:34:18.942 bw ( KiB/s): min= 4096, max= 4096, per=37.99%, avg=4096.00, stdev= 0.00, samples=1
00:34:18.942 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:34:18.942 lat (usec) : 500=5.89%, 750=31.66%, 1000=15.54%
00:34:18.942 lat (msec) : 2=46.91%
00:34:18.942 cpu : usr=1.90%, sys=2.60%, ctx=1037, majf=0, minf=1
00:34:18.942 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:18.942 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:18.942 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:18.942 issued rwts: total=512,524,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:18.942 latency : target=0, window=0, percentile=100.00%, depth=1
00:34:18.942 job1: (groupid=0, jobs=1): err= 0: pid=3396519: Wed Oct 16 07:17:18 2024
00:34:18.942 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec)
00:34:18.942 slat (nsec): min=6757, max=44697, avg=24828.09, stdev=3287.94
00:34:18.942 clat (usec): min=417, max=1339, avg=949.41, stdev=173.20
00:34:18.942 lat (usec): min=442, max=1364, avg=974.24, stdev=173.50
00:34:18.942 clat percentiles (usec):
00:34:18.942 | 1.00th=[ 553], 5.00th=[ 635], 10.00th=[ 717], 20.00th=[ 799],
00:34:18.942 | 30.00th=[ 848], 40.00th=[ 906], 50.00th=[ 963], 60.00th=[ 1012],
00:34:18.942 | 70.00th=[ 1057], 80.00th=[ 1106], 90.00th=[ 1156], 95.00th=[ 1205],
00:34:18.942 | 99.00th=[ 1287], 99.50th=[ 1303], 99.90th=[ 1336], 99.95th=[ 1336],
00:34:18.942 | 99.99th=[ 1336]
00:34:18.942 write: IOPS=735, BW=2941KiB/s (3012kB/s)(2944KiB/1001msec); 0 zone resets
00:34:18.942 slat (nsec): min=9309, max=51866, avg=28427.37, stdev=8242.05
00:34:18.942 clat (usec): min=178, max=1641, avg=639.18, stdev=144.76
00:34:18.942 lat (usec): min=188, max=1675, avg=667.61, stdev=146.98
00:34:18.942 clat percentiles (usec):
00:34:18.942 | 1.00th=[ 277], 5.00th=[ 396], 10.00th=[ 465], 20.00th=[ 523],
00:34:18.942 | 30.00th=[ 570], 40.00th=[ 619], 50.00th=[ 652], 60.00th=[ 676],
00:34:18.942 | 70.00th=[ 709], 80.00th=[ 750], 90.00th=[ 791], 95.00th=[ 840],
00:34:18.942 | 99.00th=[ 955], 99.50th=[ 1287], 99.90th=[ 1647], 99.95th=[ 1647],
00:34:18.942 | 99.99th=[ 1647]
00:34:18.942 bw ( KiB/s): min= 4096, max= 4096, per=37.99%, avg=4096.00, stdev= 0.00, samples=1
00:34:18.942 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:34:18.942 lat (usec) : 250=0.32%, 500=8.33%, 750=44.63%, 1000=28.69%
00:34:18.942 lat (msec) : 2=18.03%
00:34:18.942 cpu : usr=1.70%, sys=3.70%, ctx=1248, majf=0, minf=1
00:34:18.942 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:18.942 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:18.942 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:18.942 issued rwts: total=512,736,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:18.942 latency : target=0, window=0, percentile=100.00%, depth=1
00:34:18.942 job2: (groupid=0, jobs=1): err= 0: pid=3396521: Wed Oct 16 07:17:18 2024
00:34:18.942 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec)
00:34:18.942 slat (nsec): min=7071, max=44295, avg=24572.42, stdev=4991.51
00:34:18.942 clat (usec): min=209, max=1095, avg=801.56, stdev=126.43
00:34:18.942 lat (usec): min=235, max=1120, avg=826.14, stdev=126.92
00:34:18.942 clat percentiles (usec):
00:34:18.942 | 1.00th=[ 457], 5.00th=[ 578], 10.00th=[ 644], 20.00th=[ 701],
00:34:18.942 | 30.00th=[ 734], 40.00th=[ 766], 50.00th=[ 824], 60.00th=[ 865],
00:34:18.942 | 70.00th=[ 889], 80.00th=[ 906], 90.00th=[ 947], 95.00th=[ 971],
00:34:18.942 | 99.00th=[ 1029], 99.50th=[ 1037], 99.90th=[ 1090], 99.95th=[ 1090],
00:34:18.942 | 99.99th=[ 1090]
00:34:18.942 write: IOPS=976, BW=3904KiB/s (3998kB/s)(3908KiB/1001msec); 0 zone resets
00:34:18.942 slat (nsec): min=9446, max=52443, avg=30939.96, stdev=6419.91
00:34:18.942 clat (usec): min=144, max=890, avg=547.49, stdev=124.93
00:34:18.942 lat (usec): min=175, max=921, avg=578.43, stdev=126.26
00:34:18.942 clat percentiles (usec):
00:34:18.942 | 1.00th=[ 251], 5.00th=[ 330], 10.00th=[ 388], 20.00th=[ 437],
00:34:18.942 | 30.00th=[ 486], 40.00th=[ 519], 50.00th=[ 553], 60.00th=[ 586],
00:34:18.942 | 70.00th=[ 627], 80.00th=[ 660], 90.00th=[ 701], 95.00th=[ 742],
00:34:18.942 | 99.00th=[ 799], 99.50th=[ 832], 99.90th=[ 889], 99.95th=[ 889],
00:34:18.942 | 99.99th=[ 889]
00:34:18.942 bw ( KiB/s): min= 4096, max= 4096, per=37.99%, avg=4096.00, stdev= 0.00, samples=1
00:34:18.942 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:34:18.942 lat (usec) : 250=0.60%, 500=23.44%, 750=50.24%, 1000=24.78%
00:34:18.942 lat (msec) : 2=0.94%
00:34:18.942 cpu : usr=2.10%, sys=4.60%, ctx=1489, majf=0, minf=1
00:34:18.942 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:18.942 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:18.942 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:18.942 issued rwts: total=512,977,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:18.942 latency : target=0, window=0, percentile=100.00%, depth=1
00:34:18.942 job3: (groupid=0, jobs=1): err= 0: pid=3396522: Wed Oct 16 07:17:18 2024
00:34:18.942 read: IOPS=18, BW=74.5KiB/s (76.3kB/s)(76.0KiB/1020msec)
00:34:18.942 slat (nsec): min=26106, max=27090, avg=26571.53, stdev=322.51
00:34:18.942 clat (usec): min=761, max=42079, avg=39585.42, stdev=9410.35
00:34:18.942 lat (usec): min=788, max=42105, avg=39611.99, stdev=9410.37
00:34:18.942 clat percentiles (usec):
00:34:18.942 | 1.00th=[ 758], 5.00th=[ 758], 10.00th=[41157], 20.00th=[41157],
00:34:18.942 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681],
00:34:18.942 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206],
00:34:18.942 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:34:18.942 | 99.99th=[42206]
00:34:18.942 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets
00:34:18.942 slat (nsec): min=8948, max=66858, avg=30350.89, stdev=8106.30
00:34:18.942 clat (usec): min=118, max=848, avg=483.45, stdev=125.36
00:34:18.942 lat (usec): min=127, max=880, avg=513.80, stdev=128.10
00:34:18.942 clat percentiles (usec):
00:34:18.942 | 1.00th=[ 169], 5.00th=[ 273], 10.00th=[ 306], 20.00th=[ 388],
00:34:18.942 | 30.00th=[ 420], 40.00th=[ 461], 50.00th=[ 490], 60.00th=[ 515],
00:34:18.942 | 70.00th=[ 553], 80.00th=[ 594], 90.00th=[ 635], 95.00th=[ 676],
00:34:18.942 | 99.00th=[ 742], 99.50th=[ 791], 99.90th=[ 848], 99.95th=[ 848],
00:34:18.942 | 99.99th=[ 848]
00:34:18.942 bw ( KiB/s): min= 4096, max= 4096, per=37.99%, avg=4096.00, stdev= 0.00, samples=1
00:34:18.942 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:34:18.942 lat (usec) : 250=3.20%, 500=48.59%, 750=43.88%, 1000=0.94%
00:34:18.942 lat (msec) : 50=3.39%
00:34:18.942 cpu : usr=1.28%, sys=1.86%, ctx=531, majf=0, minf=1
00:34:18.942 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:18.942 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:18.942 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:18.942 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:18.942 latency : target=0, window=0, percentile=100.00%, depth=1
00:34:18.942
00:34:18.943 Run status group 0 (all jobs):
00:34:18.943 READ: bw=6098KiB/s (6244kB/s), 74.5KiB/s-2046KiB/s (76.3kB/s-2095kB/s), io=6220KiB (6369kB), run=1001-1020msec
00:34:18.943 WRITE: bw=10.5MiB/s (11.0MB/s), 2008KiB/s-3904KiB/s (2056kB/s-3998kB/s), io=10.7MiB (11.3MB), run=1001-1020msec
00:34:18.943
00:34:18.943 Disk stats (read/write):
00:34:18.943 nvme0n1: ios=434/512, merge=0/0, ticks=519/324, in_queue=843, util=88.08%
00:34:18.943 nvme0n2: ios=535/512, merge=0/0, ticks=513/318, in_queue=831, util=87.86%
00:34:18.943 nvme0n3: ios=512/696, merge=0/0, ticks=405/350, in_queue=755, util=88.38%
00:34:18.943 nvme0n4: ios=14/512, merge=0/0, ticks=544/195, in_queue=739, util=89.52%
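A quick consistency check on how fio's bw column relates to the other figures: it is total io divided by elapsed time, which also ties out roughly against IOPS times block size. Using job0's write leg from the first pass (io=4096KiB over run=1001msec at bs=4096); the numbers come from the log, the arithmetic below is just a sanity check:

    echo "scale=1; 4096 / 1.001" | bc     # ~4091.9 KiB/s, reported as BW=4092KiB/s
    echo $(( 1022 * 4096 / 1024 ))        # IOPS * 4KiB ~= 4088 KiB/s (same figure before rounding)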
00:34:18.943 07:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v
00:34:18.943 [global]
00:34:18.943 thread=1
00:34:18.943 invalidate=1
00:34:18.943 rw=write
00:34:18.943 time_based=1
00:34:18.943 runtime=1
00:34:18.943 ioengine=libaio
00:34:18.943 direct=1
00:34:18.943 bs=4096
00:34:18.943 iodepth=128
00:34:18.943 norandommap=0
00:34:18.943 numjobs=1
00:34:18.943
00:34:18.943 verify_dump=1
00:34:18.943 verify_backlog=512
00:34:18.943 verify_state_save=0
00:34:18.943 do_verify=1
00:34:18.943 verify=crc32c-intel
00:34:18.943 [job0]
00:34:18.943 filename=/dev/nvme0n1
00:34:18.943 [job1]
00:34:18.943 filename=/dev/nvme0n2
00:34:18.943 [job2]
00:34:18.943 filename=/dev/nvme0n3
00:34:18.943 [job3]
00:34:18.943 filename=/dev/nvme0n4
00:34:18.943 Could not set queue depth (nvme0n1)
00:34:18.943 Could not set queue depth (nvme0n2)
00:34:18.943 Could not set queue depth (nvme0n3)
00:34:18.943 Could not set queue depth (nvme0n4)
00:34:19.203 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:34:19.203 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:34:19.203 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:34:19.203 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:34:19.203 fio-3.35
00:34:19.203 Starting 4 threads
00:34:20.586
00:34:20.586 job0: (groupid=0, jobs=1): err= 0: pid=3397040: Wed Oct 16 07:17:19 2024
00:34:20.586 read: IOPS=7469, BW=29.2MiB/s (30.6MB/s)(29.4MiB/1007msec)
00:34:20.586 slat (nsec): min=893, max=8637.9k, avg=60683.31, stdev=446212.22
00:34:20.586 clat (usec): min=1101, max=20367, avg=8250.04, stdev=2497.61
00:34:20.586 lat (usec): min=1125, max=20371, avg=8310.72, stdev=2522.87
00:34:20.586 clat percentiles (usec):
00:34:20.586 | 1.00th=[ 2835], 5.00th=[ 5407], 10.00th=[ 5866], 20.00th=[ 6587],
00:34:20.586 | 30.00th=[ 6980], 40.00th=[ 7373], 50.00th=[ 7701], 60.00th=[ 8160],
00:34:20.586 | 70.00th=[ 8848], 80.00th=[ 9765], 90.00th=[11600], 95.00th=[12780],
00:34:20.586 | 99.00th=[16909], 99.50th=[18482], 99.90th=[20317], 99.95th=[20317],
00:34:20.586 | 99.99th=[20317]
00:34:20.586 write: IOPS=7626, BW=29.8MiB/s (31.2MB/s)(30.0MiB/1007msec); 0 zone resets
00:34:20.586 slat (nsec): min=1543, max=10023k, avg=62923.02, stdev=429235.44
00:34:20.586 clat (usec): min=595, max=53602, avg=8553.33, stdev=6120.37
00:34:20.586 lat (usec): min=858, max=53612, avg=8616.26, stdev=6158.08
00:34:20.586 clat percentiles (usec):
00:34:20.586 | 1.00th=[ 2671], 5.00th=[ 4047], 10.00th=[ 4948], 20.00th=[ 5932],
00:34:20.586 | 30.00th=[ 6521], 40.00th=[ 6915], 50.00th=[ 7373], 60.00th=[ 7767],
00:34:20.586 | 70.00th=[ 8225], 80.00th=[ 9372], 90.00th=[11731], 95.00th=[14615],
00:34:20.586 | 99.00th=[47973], 99.50th=[49546], 99.90th=[52691], 99.95th=[53740],
00:34:20.586 | 99.99th=[53740]
00:34:20.586 bw ( KiB/s): min=28720, max=32720, per=30.40%, avg=30720.00, stdev=2828.43, samples=2
00:34:20.586 iops : min= 7180, max= 8180, avg=7680.00, stdev=707.11, samples=2
00:34:20.586 lat (usec) : 750=0.01%, 1000=0.02%
00:34:20.586 lat (msec) : 2=0.56%, 4=2.77%, 10=79.03%, 20=15.91%, 50=1.53%
00:34:20.586 lat (msec) : 100=0.17%
00:34:20.586 cpu : usr=6.56%, sys=6.36%, ctx=475, majf=0, minf=1
00:34:20.586 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6%
00:34:20.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:20.586 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:34:20.586 issued rwts: total=7522,7680,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:20.586 latency : target=0, window=0, percentile=100.00%, depth=128
00:34:20.586 job1: (groupid=0, jobs=1): err= 0: pid=3397041: Wed Oct 16 07:17:19 2024
00:34:20.586 read: IOPS=5544, BW=21.7MiB/s (22.7MB/s)(21.8MiB/1007msec)
00:34:20.586 slat (nsec): min=900, max=13697k, avg=97312.79, stdev=645564.24
00:34:20.586 clat (usec): min=1163, max=46552, avg=12418.91, stdev=7621.33
00:34:20.586 lat (usec): min=4120, max=46576, avg=12516.22, stdev=7686.51
00:34:20.586 clat percentiles (usec):
00:34:20.586 | 1.00th=[ 4817], 5.00th=[ 5866], 10.00th=[ 6915], 20.00th=[ 7963],
00:34:20.586 | 30.00th=[ 8455], 40.00th=[ 9241], 50.00th=[10159], 60.00th=[11600],
00:34:20.586 | 70.00th=[12125], 80.00th=[13566], 90.00th=[21890], 95.00th=[32900],
00:34:20.586 | 99.00th=[42206], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827],
00:34:20.586 | 99.99th=[46400]
00:34:20.586 write: IOPS=5592, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1007msec); 0 zone resets
00:34:20.586 slat (nsec): min=1606, max=12286k, avg=69963.35, stdev=470985.44
00:34:20.587 clat (usec): min=451, max=39861, avg=10306.58, stdev=5308.21
00:34:20.587 lat (usec): min=463, max=39869, avg=10376.54, stdev=5337.61
00:34:20.587 clat percentiles (usec):
00:34:20.587 | 1.00th=[ 725], 5.00th=[ 4490], 10.00th=[ 5473], 20.00th=[ 7308],
00:34:20.587 | 30.00th=[ 7767], 40.00th=[ 8225], 50.00th=[ 8717], 60.00th=[ 9372],
00:34:20.587 | 70.00th=[10421], 80.00th=[13566], 90.00th=[17695], 95.00th=[22152],
00:34:20.587 | 99.00th=[28705], 99.50th=[28967], 99.90th=[34341], 99.95th=[34341],
00:34:20.587 | 99.99th=[40109]
00:34:20.587 bw ( KiB/s): min=20480, max=24576, per=22.30%, avg=22528.00, stdev=2896.31, samples=2
00:34:20.587 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2
00:34:20.587 lat (usec) : 500=0.05%, 750=0.56%, 1000=0.47%
00:34:20.587 lat (msec) : 2=0.25%, 4=0.89%, 10=55.69%, 20=33.40%, 50=8.68%
00:34:20.587 cpu : usr=3.88%, sys=4.97%, ctx=519, majf=0, minf=1
00:34:20.587 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4%
00:34:20.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:20.587 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:34:20.587 issued rwts: total=5583,5632,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:20.587 latency : target=0, window=0, percentile=100.00%, depth=128
00:34:20.587 job2: (groupid=0, jobs=1): err= 0: pid=3397043: Wed Oct 16 07:17:19 2024
00:34:20.587 read: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec)
00:34:20.587 slat (nsec): min=977, max=9330.4k, avg=79012.79, stdev=462306.06
00:34:20.587 clat (usec): min=4299, max=38012, avg=9755.27, stdev=2639.73
00:34:20.587 lat (usec): min=4302, max=38020, avg=9834.28, stdev=2682.36
00:34:20.587 clat percentiles (usec):
00:34:20.587 | 1.00th=[ 5211], 5.00th=[ 6915], 10.00th=[ 7504], 20.00th=[ 8160],
00:34:20.587 | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9896],
00:34:20.587 | 70.00th=[10290], 80.00th=[10814], 90.00th=[11600], 95.00th=[13435],
00:34:20.587 | 99.00th=[21627], 99.50th=[22676], 99.90th=[31589], 99.95th=[38011],
00:34:20.587 | 99.99th=[38011]
00:34:20.587 write: IOPS=6358, BW=24.8MiB/s (26.0MB/s)(24.9MiB/1003msec); 0 zone resets
00:34:20.587 slat (nsec): min=1631, max=21300k, avg=75803.14, stdev=531055.12
00:34:20.587 clat (usec): min=775, max=44635, avg=10522.14, stdev=4900.73
00:34:20.587 lat (usec): min=1294, max=44644, avg=10597.94, stdev=4925.60
00:34:20.587 clat percentiles (usec):
00:34:20.587 | 1.00th=[ 4490], 5.00th=[ 6128], 10.00th=[ 7177], 20.00th=[ 8029],
00:34:20.587 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[ 9765],
00:34:20.587 | 70.00th=[10290], 80.00th=[11207], 90.00th=[13566], 95.00th=[22676],
00:34:20.587 | 99.00th=[35390], 99.50th=[35914], 99.90th=[43254], 99.95th=[44827],
00:34:20.587 | 99.99th=[44827]
00:34:20.587 bw ( KiB/s): min=24576, max=25424, per=24.74%, avg=25000.00, stdev=599.63, samples=2
00:34:20.587 iops : min= 6144, max= 6356, avg=6250.00, stdev=149.91, samples=2
00:34:20.587 lat (usec) : 1000=0.01%
00:34:20.587 lat (msec) : 2=0.02%, 4=0.39%, 10=63.51%, 20=32.14%, 50=3.94%
00:34:20.587 cpu : usr=3.39%, sys=6.89%, ctx=572, majf=0, minf=1
00:34:20.587 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5%
00:34:20.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:20.587 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:34:20.587 issued rwts: total=6144,6378,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:20.587 latency : target=0, window=0, percentile=100.00%, depth=128
00:34:20.587 job3: (groupid=0, jobs=1): err= 0: pid=3397044: Wed Oct 16 07:17:19 2024
00:34:20.587 read: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec)
00:34:20.587 slat (nsec): min=977, max=10605k, avg=90029.43, stdev=543363.83
00:34:20.587 clat (usec): min=2467, max=49859, avg=11116.73, stdev=4725.17
00:34:20.587 lat (usec): min=2474, max=49867, avg=11206.76, stdev=4779.45
00:34:20.587 clat percentiles (usec):
00:34:20.587 | 1.00th=[ 5211], 5.00th=[ 6194], 10.00th=[ 6915], 20.00th=[ 7701],
00:34:20.587 | 30.00th=[ 8717], 40.00th=[ 9241], 50.00th=[ 9765], 60.00th=[10290],
00:34:20.587 | 70.00th=[12125], 80.00th=[14353], 90.00th=[17957], 95.00th=[19268],
00:34:20.587 | 99.00th=[24511], 99.50th=[39060], 99.90th=[46400], 99.95th=[50070],
00:34:20.587 | 99.99th=[50070]
00:34:20.587 write: IOPS=5728, BW=22.4MiB/s (23.5MB/s)(22.4MiB/1003msec); 0 zone resets
00:34:20.587 slat (nsec): min=1648, max=12195k, avg=76450.03, stdev=456365.73
00:34:20.587 clat (usec): min=452, max=49856, avg=11214.60, stdev=7305.17
00:34:20.587 lat (usec): min=465, max=49866, avg=11291.05, stdev=7334.56
00:34:20.587 clat percentiles (usec):
00:34:20.587 | 1.00th=[ 1434], 5.00th=[ 3392], 10.00th=[ 5211], 20.00th=[ 7177],
00:34:20.587 | 30.00th=[ 7898], 40.00th=[ 8160], 50.00th=[ 8455], 60.00th=[ 9896],
00:34:20.587 | 70.00th=[10814], 80.00th=[14484], 90.00th=[20579], 95.00th=[26346],
00:34:20.587 | 99.00th=[43254], 99.50th=[46400], 99.90th=[46924], 99.95th=[46924],
00:34:20.587 | 99.99th=[50070]
00:34:20.587 bw ( KiB/s): min=20480, max=24576, per=22.30%, avg=22528.00, stdev=2896.31, samples=2
00:34:20.587 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2
00:34:20.587 lat (usec) : 500=0.05%, 750=0.09%, 1000=0.14%
00:34:20.587 lat (msec) : 2=0.59%, 4=3.13%, 10=55.09%, 20=33.72%, 50=7.19%
00:34:20.587 cpu : usr=3.79%, sys=5.89%, ctx=696, majf=0, minf=1
00:34:20.587 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4%
00:34:20.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:20.587 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:34:20.587 issued rwts: total=5632,5746,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:20.587 latency : target=0, window=0, percentile=100.00%, depth=128
00:34:20.587
00:34:20.587 Run status group 0 (all jobs):
00:34:20.587 READ: bw=96.5MiB/s (101MB/s), 21.7MiB/s-29.2MiB/s (22.7MB/s-30.6MB/s), io=97.2MiB (102MB), run=1003-1007msec
00:34:20.587 WRITE: bw=98.7MiB/s (103MB/s), 21.8MiB/s-29.8MiB/s (22.9MB/s-31.2MB/s), io=99.4MiB (104MB), run=1003-1007msec
00:34:20.587
00:34:20.587 Disk stats (read/write):
00:34:20.587 nvme0n1: ios=6194/6618, merge=0/0, ticks=41045/39374, in_queue=80419, util=90.68%
00:34:20.587 nvme0n2: ios=4628/4647, merge=0/0, ticks=27915/24847, in_queue=52762, util=96.64%
00:34:20.587 nvme0n3: ios=5177/5231, merge=0/0, ticks=21970/24888, in_queue=46858, util=96.73%
00:34:20.587 nvme0n4: ios=4530/4608, merge=0/0, ticks=33210/36496, in_queue=69706, util=98.18%
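Three of the four fio passes are done at this point; all four wrapper invocations (@50 through @53) differ only in the -t workload and -d queue-depth arguments. A condensed sketch of the equivalent driver loop, noting that target/fio.sh actually spells the steps out individually rather than looping:

    wrapper=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper
    for d in 1 128; do
        for t in write randwrite; do
            $wrapper -p nvmf -i 4096 -d $d -t $t -r 1 -v
        done
    done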
00:34:20.587 07:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v
00:34:20.587 [global]
00:34:20.587 thread=1
00:34:20.587 invalidate=1
00:34:20.587 rw=randwrite
00:34:20.587 time_based=1
00:34:20.587 runtime=1
00:34:20.587 ioengine=libaio
00:34:20.587 direct=1
00:34:20.587 bs=4096
00:34:20.587 iodepth=128
00:34:20.587 norandommap=0
00:34:20.587 numjobs=1
00:34:20.587
00:34:20.587 verify_dump=1
00:34:20.587 verify_backlog=512
00:34:20.587 verify_state_save=0
00:34:20.587 do_verify=1
00:34:20.587 verify=crc32c-intel
00:34:20.587 [job0]
00:34:20.587 filename=/dev/nvme0n1
00:34:20.587 [job1]
00:34:20.587 filename=/dev/nvme0n2
00:34:20.587 [job2]
00:34:20.587 filename=/dev/nvme0n3
00:34:20.587 [job3]
00:34:20.587 filename=/dev/nvme0n4
00:34:20.849 Could not set queue depth (nvme0n1)
00:34:20.849 Could not set queue depth (nvme0n2)
00:34:20.849 Could not set queue depth (nvme0n3)
00:34:20.849 Could not set queue depth (nvme0n4)
00:34:20.849 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:34:20.849 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:34:20.849 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:34:20.849 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:34:20.849 fio-3.35
00:34:20.849 Starting 4 threads
00:34:22.235
00:34:22.235 job0: (groupid=0, jobs=1): err= 0: pid=3397567: Wed Oct 16 07:17:21 2024
00:34:22.235 read: IOPS=7657, BW=29.9MiB/s (31.4MB/s)(30.0MiB/1003msec)
00:34:22.235 slat (nsec): min=891, max=4595.7k, avg=61900.13, stdev=345637.96
00:34:22.235 clat (usec): min=3063, max=15207, avg=8042.54, stdev=1647.41
00:34:22.235 lat (usec): min=3068, max=15217, avg=8104.44, stdev=1668.65
00:34:22.235 clat percentiles (usec):
00:34:22.235 | 1.00th=[ 5145], 5.00th=[ 5932], 10.00th=[ 6259], 20.00th=[ 6783],
00:34:22.235 | 30.00th=[ 7046], 40.00th=[ 7439], 50.00th=[ 7767], 60.00th=[ 8029],
00:34:22.235 | 70.00th=[ 8455], 80.00th=[ 9372], 90.00th=[10421], 95.00th=[11076],
00:34:22.235 | 99.00th=[13173], 99.50th=[14222], 99.90th=[14484], 99.95th=[14484],
00:34:22.235 | 99.99th=[15270]
00:34:22.235 write: IOPS=7765, BW=30.3MiB/s (31.8MB/s)(30.4MiB/1003msec); 0 zone resets
00:34:22.235 slat (nsec): min=1502, max=11455k, avg=63248.90, stdev=406728.85
00:34:22.235 clat (usec): min=2245, max=25951, avg=8378.70, stdev=2555.53
00:34:22.235 lat (usec): min=2248, max=25962, avg=8441.94, stdev=2577.26
00:34:22.235 clat percentiles (usec):
00:34:22.235 | 1.00th=[ 4293], 5.00th=[ 5800], 10.00th=[ 6587], 20.00th=[ 7177],
00:34:22.235 | 30.00th=[ 7439], 40.00th=[ 7701], 50.00th=[ 7963], 60.00th=[ 8225],
00:34:22.235 | 70.00th=[ 8586], 80.00th=[ 8848], 90.00th=[10290], 95.00th=[12913],
00:34:22.235 | 99.00th=[22414], 99.50th=[23725], 99.90th=[25035], 99.95th=[25822],
00:34:22.235 | 99.99th=[25822]
00:34:22.235 bw ( KiB/s): min=27976, max=33397, per=30.69%, avg=30686.50, stdev=3833.23, samples=2
00:34:22.235 iops : min= 6994, max= 8349, avg=7671.50, stdev=958.13, samples=2
00:34:22.235 lat (msec) : 4=0.37%, 10=87.41%, 20=11.55%, 50=0.67%
00:34:22.235 cpu : usr=3.39%, sys=7.98%, ctx=585, majf=0, minf=1
00:34:22.235 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6%
00:34:22.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:22.235 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:34:22.235 issued rwts: total=7680,7789,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:22.235 latency : target=0, window=0, percentile=100.00%, depth=128
00:34:22.235 job1: (groupid=0, jobs=1): err= 0: pid=3397568: Wed Oct 16 07:17:21 2024
00:34:22.235 read: IOPS=5230, BW=20.4MiB/s (21.4MB/s)(20.5MiB/1003msec)
00:34:22.235 slat (nsec): min=899, max=8074.9k, avg=96069.06, stdev=563138.95
00:34:22.235 clat (usec): min=1695, max=32823, avg=12490.35, stdev=5234.59
00:34:22.235 lat (usec): min=3114, max=32832, avg=12586.42, stdev=5283.13
00:34:22.235 clat percentiles (usec):
00:34:22.235 | 1.00th=[ 6063], 5.00th=[ 7373], 10.00th=[ 8094], 20.00th=[ 8586],
00:34:22.235 | 30.00th=[ 9110], 40.00th=[ 9634], 50.00th=[10552], 60.00th=[11338],
00:34:22.235 | 70.00th=[13042], 80.00th=[16057], 90.00th=[21627], 95.00th=[24249],
00:34:22.235 | 99.00th=[25560], 99.50th=[28443], 99.90th=[29754], 99.95th=[32900],
00:34:22.235 | 99.99th=[32900]
00:34:22.235 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets
00:34:22.235 slat (nsec): min=1502, max=7288.0k, avg=81832.71, stdev=462777.20
00:34:22.235 clat (usec): min=1311, max=32782, avg=10966.63, stdev=4677.86
00:34:22.235 lat (usec): min=1323, max=32787, avg=11048.47, stdev=4700.59
00:34:22.235 clat percentiles (usec):
00:34:22.235 | 1.00th=[ 4948], 5.00th=[ 6325], 10.00th=[ 6652], 20.00th=[ 7373],
00:34:22.235 | 30.00th=[ 8029], 40.00th=[ 8455], 50.00th=[ 9372], 60.00th=[10683],
00:34:22.235 | 70.00th=[12256], 80.00th=[13960], 90.00th=[17695], 95.00th=[21365],
00:34:22.235 | 99.00th=[26870], 99.50th=[27657], 99.90th=[30278], 99.95th=[30540],
00:34:22.235 | 99.99th=[32900]
00:34:22.235 bw ( KiB/s): min=20480, max=24568, per=22.53%, avg=22524.00, stdev=2890.65, samples=2
00:34:22.235 iops : min= 5120, max= 6142, avg=5631.00, stdev=722.66, samples=2
00:34:22.235 lat (msec) : 2=0.07%, 4=0.08%, 10=49.15%, 20=40.88%, 50=9.81%
00:34:22.235 cpu : usr=2.20%, sys=5.79%, ctx=536, majf=0, minf=1
00:34:22.235 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4%
00:34:22.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:22.235 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:34:22.235 issued rwts: total=5246,5632,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:22.235 latency : target=0, window=0, percentile=100.00%, depth=128
00:34:22.235 job2: (groupid=0, jobs=1): err= 0: pid=3397569: Wed Oct 16 07:17:21 2024
00:34:22.235 read: IOPS=4426, BW=17.3MiB/s (18.1MB/s)(17.4MiB/1005msec)
00:34:22.235 slat (nsec): min=964, max=12524k, avg=115334.11, stdev=702601.65
00:34:22.235 clat (usec): min=2147, max=37779, avg=14464.30, stdev=5693.70
00:34:22.235 lat (usec): min=5599, max=37787, avg=14579.63, stdev=5735.56
00:34:22.235 clat percentiles (usec):
00:34:22.235 | 1.00th=[ 7373], 5.00th=[ 8848], 10.00th=[ 9372], 20.00th=[10552],
00:34:22.235 | 30.00th=[11076], 40.00th=[11469], 50.00th=[11863], 60.00th=[13435],
00:34:22.235 | 70.00th=[16057], 80.00th=[18220], 90.00th=[23200], 95.00th=[26870],
00:34:22.235 | 99.00th=[33162], 99.50th=[38011], 99.90th=[38011], 99.95th=[38011],
00:34:22.235 | 99.99th=[38011]
00:34:22.235 write: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec); 0 zone resets
00:34:22.235 slat (nsec): min=1585, max=12676k, avg=101141.15, stdev=690685.91
00:34:22.235 clat (usec): min=3625, max=39017, avg=13655.71, stdev=5258.96
00:34:22.235 lat (usec): min=5304, max=39026, avg=13756.86, stdev=5300.13
00:34:22.235 clat percentiles (usec):
00:34:22.235 | 1.00th=[ 8094], 5.00th=[ 9110], 10.00th=[ 9503], 20.00th=[10290],
00:34:22.235 | 30.00th=[10814], 40.00th=[11469], 50.00th=[11863], 60.00th=[12518],
00:34:22.235 | 70.00th=[13304], 80.00th=[16450], 90.00th=[21627], 95.00th=[22938],
00:34:22.235 | 99.00th=[34341], 99.50th=[34341], 99.90th=[39060], 99.95th=[39060],
00:34:22.235 | 99.99th=[39060]
00:34:22.235 bw ( KiB/s): min=16384, max=20480, per=18.43%, avg=18432.00, stdev=2896.31, samples=2
00:34:22.235 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2
00:34:22.236 lat (msec) : 4=0.02%, 10=16.71%, 20=69.18%, 50=14.09%
00:34:22.236 cpu : usr=2.69%, sys=4.18%, ctx=336, majf=0, minf=1
00:34:22.236 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3%
00:34:22.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:22.236 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:34:22.236 issued rwts: total=4449,4608,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:22.236 latency : target=0, window=0, percentile=100.00%, depth=128
00:34:22.236 job3: (groupid=0, jobs=1): err= 0: pid=3397570: Wed Oct 16 07:17:21 2024
00:34:22.236 read: IOPS=6609, BW=25.8MiB/s (27.1MB/s)(26.0MiB/1007msec)
00:34:22.236 slat (nsec): min=979, max=8201.0k, avg=74783.25, stdev=585002.16
00:34:22.236 clat (usec): min=2704, max=22256, avg=9588.23, stdev=2437.21
00:34:22.236 lat (usec): min=2710, max=22258, avg=9663.02, stdev=2480.59
00:34:22.236 clat percentiles (usec):
00:34:22.236 | 1.00th=[ 5080], 5.00th=[ 6718], 10.00th=[ 7308], 20.00th=[ 7898],
00:34:22.236 | 30.00th=[ 8356], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[ 9241],
00:34:22.236 | 70.00th=[ 9896], 80.00th=[11600], 90.00th=[12911], 95.00th=[14615],
00:34:22.236 | 99.00th=[16909], 99.50th=[18744], 99.90th=[21890], 99.95th=[22152],
00:34:22.236 | 99.99th=[22152]
00:34:22.236 write: IOPS=7092, BW=27.7MiB/s (29.1MB/s)(27.9MiB/1007msec); 0 zone resets
00:34:22.236 slat (nsec): min=1601, max=8582.6k, avg=65566.29, stdev=422279.48
00:34:22.236 clat (usec): min=1276, max=22253, avg=8956.28, stdev=2869.43
00:34:22.236 lat (usec): min=1286, max=22255, avg=9021.84, stdev=2883.81
00:34:22.236 clat percentiles (usec):
00:34:22.236 | 1.00th=[ 3752], 5.00th=[ 5014], 10.00th=[ 5604], 20.00th=[ 6652],
00:34:22.236 | 30.00th=[ 7701], 40.00th=[ 8291], 50.00th=[ 8717], 60.00th=[ 8979],
00:34:22.236 | 70.00th=[ 9372], 80.00th=[10945], 90.00th=[12518], 95.00th=[14877],
00:34:22.236 | 99.00th=[18220], 99.50th=[18744], 99.90th=[19268], 99.95th=[19268],
00:34:22.236 | 99.99th=[22152]
00:34:22.236 bw ( KiB/s): min=26632, max=29488, per=28.06%, avg=28060.00, stdev=2019.50, samples=2
00:34:22.236 iops : min= 6658, max= 7372, avg=7015.00, stdev=504.87, samples=2
00:34:22.236 lat (msec) : 2=0.01%, 4=1.12%, 10=71.81%, 20=26.88%, 50=0.17%
00:34:22.236 cpu : usr=4.27%, sys=7.36%, ctx=553, majf=0, minf=2
00:34:22.236 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5%
00:34:22.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:22.236 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:34:22.236 issued rwts: total=6656,7142,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:22.236 latency : target=0, window=0, percentile=100.00%, depth=128
00:34:22.236
00:34:22.236 Run status group 0 (all jobs):
00:34:22.236 READ: bw=93.2MiB/s (97.7MB/s), 17.3MiB/s-29.9MiB/s (18.1MB/s-31.4MB/s), io=93.9MiB (98.4MB), run=1003-1007msec
00:34:22.236 WRITE: bw=97.6MiB/s (102MB/s), 17.9MiB/s-30.3MiB/s (18.8MB/s-31.8MB/s), io=98.3MiB (103MB), run=1003-1007msec
00:34:22.236
00:34:22.236 Disk stats (read/write):
00:34:22.236 nvme0n1: ios=6494/6656, merge=0/0, ticks=24260/25260, in_queue=49520, util=88.08%
00:34:22.236 nvme0n2: ios=4586/4608, merge=0/0, ticks=36014/34573, in_queue=70587, util=87.86%
00:34:22.236 nvme0n3: ios=3679/4096, merge=0/0, ticks=22551/20123, in_queue=42674, util=96.83%
00:34:22.236 nvme0n4: ios=5632/6144, merge=0/0, ticks=50865/49464, in_queue=100329, util=89.52%
00:34:22.236 07:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync
00:34:22.236 07:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3397723
00:34:22.236 07:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3
00:34:22.236 07:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10
00:34:22.236 [global]
00:34:22.236 thread=1
00:34:22.236 invalidate=1
00:34:22.236 rw=read
00:34:22.236 time_based=1
00:34:22.236 runtime=10
00:34:22.236 ioengine=libaio
00:34:22.236 direct=1
00:34:22.236 bs=4096
00:34:22.236 iodepth=1
00:34:22.236 norandommap=1
00:34:22.236 numjobs=1
00:34:22.236
00:34:22.236 [job0]
00:34:22.236 filename=/dev/nvme0n1
00:34:22.236 [job1]
00:34:22.236 filename=/dev/nvme0n2
00:34:22.236 [job2]
00:34:22.236 filename=/dev/nvme0n3
00:34:22.236 [job3]
00:34:22.236 filename=/dev/nvme0n4
00:34:22.236 Could not set queue depth (nvme0n1)
00:34:22.236 Could not set queue depth (nvme0n2)
00:34:22.236 Could not set queue depth (nvme0n3)
00:34:22.236 Could not set queue depth (nvme0n4)
00:34:22.497 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:34:22.497 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:34:22.497 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:34:22.497 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:34:22.497 fio-3.35
00:34:22.497 Starting 4 threads
00:34:25.044 07:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0
00:34:25.044 07:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0
00:34:25.044 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=258048, buflen=4096
00:34:25.044 fio: pid=3398096, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:34:25.305 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=278528, buflen=4096
00:34:25.305 fio: pid=3398095, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:34:25.305 07:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:34:25.305 07:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0
00:34:25.566 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=2772992, buflen=4096
00:34:25.566 fio: pid=3398082, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:34:25.566 07:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:34:25.566 07:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1
00:34:25.566 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=12689408, buflen=4096
00:34:25.566 fio: pid=3398093, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:34:25.828 07:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:34:25.828 07:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2
00:34:25.828
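This is the hotplug half of the test: the 10-second background read job (fio_pid=3397723) is still in flight while steps @63-@66 tear the raid and malloc bdevs out from under the subsystem, so outstanding reads come back with err=95 (EOPNOTSUPP on Linux), visible in the io_u errors above and in the per-job summaries below. A condensed replay of the pattern, with the bdev names and RPC calls taken from the trace:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # fio is already reading /dev/nvme0n1..n4 in the background
    $rpc bdev_raid_delete concat0
    $rpc bdev_raid_delete raid0
    for malloc_bdev in Malloc0 Malloc1 Malloc2; do
        $rpc bdev_malloc_delete "$malloc_bdev"   # in-flight reads now fail with error=95
    done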
00:34:25.828 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3398082: Wed Oct 16 07:17:25 2024
00:34:25.828 read: IOPS=227, BW=911KiB/s (932kB/s)(2708KiB/2974msec)
00:34:25.828 slat (usec): min=2, max=13330, avg=46.68, stdev=510.94
00:34:25.828 clat (usec): min=727, max=42070, avg=4307.09, stdev=10829.36
00:34:25.828 lat (usec): min=755, max=42097, avg=4353.80, stdev=10835.70
00:34:25.828 clat percentiles (usec):
00:34:25.828 | 1.00th=[ 840], 5.00th=[ 971], 10.00th=[ 1029], 20.00th=[ 1090],
00:34:25.828 | 30.00th=[ 1123], 40.00th=[ 1156], 50.00th=[ 1172], 60.00th=[ 1205],
00:34:25.828 | 70.00th=[ 1221], 80.00th=[ 1254], 90.00th=[ 1319], 95.00th=[41157],
00:34:25.828 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:34:25.828 | 99.99th=[42206]
00:34:25.828 bw ( KiB/s): min= 96, max= 1696, per=18.49%, avg=918.40, stdev=750.88, samples=5
00:34:25.828 iops : min= 24, max= 424, avg=229.60, stdev=187.72, samples=5
00:34:25.828 lat (usec) : 750=0.15%, 1000=7.52%
00:34:25.828 lat (msec) : 2=84.37%, 50=7.82%
00:34:25.828 cpu : usr=0.57%, sys=0.74%, ctx=680, majf=0, minf=1
00:34:25.828 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:25.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:25.828 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:25.828 issued rwts: total=678,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:25.828 latency : target=0, window=0, percentile=100.00%, depth=1
00:34:25.828 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3398093: Wed Oct 16 07:17:25 2024
00:34:25.828 read: IOPS=984, BW=3938KiB/s (4032kB/s)(12.1MiB/3147msec)
00:34:25.828 slat (usec): min=6, max=23064, avg=38.68, stdev=450.49
00:34:25.828 clat (usec): min=347, max=1317, avg=967.12, stdev=84.68
00:34:25.828 lat (usec): min=375, max=24117, avg=1005.80, stdev=459.83
00:34:25.828 clat percentiles (usec):
00:34:25.828 | 1.00th=[ 725], 5.00th=[ 816], 10.00th=[ 865], 20.00th=[ 914],
00:34:25.828 | 30.00th=[ 938], 40.00th=[ 955], 50.00th=[ 971], 60.00th=[ 988],
00:34:25.828 | 70.00th=[ 1004], 80.00th=[ 1029], 90.00th=[ 1057], 95.00th=[ 1090],
00:34:25.828 | 99.00th=[ 1188], 99.50th=[ 1205], 99.90th=[ 1237], 99.95th=[ 1287],
00:34:25.828 | 99.99th=[ 1319]
00:34:25.828 bw ( KiB/s): min= 3787, max= 4072, per=79.76%, avg=3960.50, stdev=102.80, samples=6
00:34:25.828 iops : min= 946, max= 1018, avg=990.00, stdev=25.95, samples=6
00:34:25.828 lat (usec) : 500=0.03%, 750=1.39%, 1000=66.25%
00:34:25.828 lat (msec) : 2=32.30%
00:34:25.828 cpu : usr=2.64%, sys=3.27%, ctx=3103, majf=0, minf=2
00:34:25.828 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:25.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:25.828 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:25.828 issued rwts: total=3099,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:25.828 latency : target=0, window=0, percentile=100.00%, depth=1
00:34:25.828 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3398095: Wed Oct 16 07:17:25 2024
00:34:25.828 read: IOPS=24, BW=96.8KiB/s (99.2kB/s)(272KiB/2809msec)
00:34:25.828 slat (nsec): min=25444, max=39808, avg=26578.03, stdev=1649.74
00:34:25.828 clat (usec): min=1203, max=42129, avg=40956.01, stdev=4917.41
00:34:25.828 lat (usec): min=1243, max=42155, avg=40982.60, stdev=4915.78
00:34:25.828 clat percentiles (usec):
00:34:25.828 | 1.00th=[ 1205], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157],
00:34:25.828 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681],
00:34:25.828 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206],
00:34:25.828 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:34:25.828 | 99.99th=[42206]
00:34:25.828 bw ( KiB/s): min= 96, max= 104, per=1.95%, avg=97.60, stdev= 3.58, samples=5
00:34:25.828 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5
00:34:25.828 lat (msec) : 2=1.45%, 50=97.10%
00:34:25.828 cpu : usr=0.11%, sys=0.00%, ctx=69, majf=0, minf=1
00:34:25.828 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:25.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:25.828 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:25.828 issued rwts: total=69,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:25.828 latency : target=0, window=0, percentile=100.00%, depth=1
00:34:25.828 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3398096: Wed Oct 16 07:17:25 2024
00:34:25.828 read: IOPS=24, BW=96.0KiB/s (98.3kB/s)(252KiB/2625msec)
00:34:25.828 slat (nsec): min=20743, max=34601, avg=25412.97, stdev=1321.85
00:34:25.828 clat (usec): min=856, max=42246, avg=41274.32, stdev=5178.22
00:34:25.828 lat (usec): min=891, max=42272, avg=41299.73, stdev=5177.04
00:34:25.828 clat percentiles (usec):
00:34:25.828 | 1.00th=[ 857], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681],
00:34:25.828 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206],
00:34:25.828 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206],
00:34:25.828 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:34:25.828 | 99.99th=[42206]
00:34:25.828 bw ( KiB/s): min= 96, max= 96, per=1.93%, avg=96.00, stdev= 0.00, samples=5
00:34:25.828 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5
00:34:25.828 lat (usec) : 1000=1.56%
00:34:25.828 lat (msec) : 50=96.88%
00:34:25.828 cpu : usr=0.11%, sys=0.00%, ctx=64, majf=0, minf=2
00:34:25.828 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:25.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:25.828 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:25.829 issued rwts: total=64,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:25.829 latency : target=0, window=0, percentile=100.00%, depth=1
00:34:25.829
00:34:25.829 Run status group 0 (all jobs):
00:34:25.829 READ: bw=4965KiB/s (5084kB/s), 96.0KiB/s-3938KiB/s (98.3kB/s-4032kB/s), io=15.3MiB (16.0MB), run=2625-3147msec
00:34:25.829
00:34:25.829 Disk stats (read/write):
00:34:25.829 nvme0n1: ios=649/0, merge=0/0, ticks=2750/0, in_queue=2750, util=94.32%
00:34:25.829 nvme0n2: ios=3079/0, merge=0/0, ticks=3542/0, in_queue=3542, util=98.82%
00:34:25.829 nvme0n3: ios=92/0, merge=0/0, ticks=2724/0, in_queue=2724, util=99.15%
00:34:25.829 nvme0n4: ios=62/0, merge=0/0, ticks=2560/0, in_queue=2560, util=96.42%
00:34:25.829 07:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:34:25.829 07:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3
00:34:26.089 07:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:34:26.089 07:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4
00:34:26.350 07:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:34:26.350 07:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5
00:34:26.350 07:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:34:26.350 07:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6
00:34:26.610 07:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0
00:34:26.610 07:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 3397723
00:34:26.610 07:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4
00:34:26.610 07:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:34:26.610 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:34:26.610 07:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:34:26.610 07:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0
00:34:26.610 07:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:34:26.610 07:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:34:26.610 07:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:34:26.610 07:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:34:26.610 07:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0
00:34:26.610 07:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']'
00:34:26.610 07:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected'
00:34:26.610 nvmf hotplug test: fio failed as expected
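Because the deletions are deliberate, fio's non-zero exit is the pass condition: the script records fio_status=4 from the waited pid and only treats status 0 as a problem. The shape of that check, condensed from the @69-@80 trace above (the failure-branch message below is illustrative, not the script's own):

    fio_status=0
    wait "$fio_pid" || fio_status=$?
    if [ "$fio_status" -eq 0 ]; then
        echo 'nvmf hotplug test: fio did not fail as expected' >&2
        exit 1
    fi
    echo 'nvmf hotplug test: fio failed as expected'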
00:34:26.610 07:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:34:26.870 07:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state
00:34:26.870 07:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state
00:34:26.870 07:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state
00:34:26.870 07:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT
00:34:26.870 07:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini
00:34:26.870 07:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup
00:34:26.870 07:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync
00:34:26.870 07:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:34:26.870 07:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e
00:34:26.870 07:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20}
00:34:26.870 07:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:34:26.870 rmmod nvme_tcp
00:34:26.870 rmmod nvme_fabrics
00:34:26.870 rmmod nvme_keyring
00:34:26.870 07:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:34:26.870 07:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e
00:34:26.870 07:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0
00:34:26.870 07:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 3394413 ']'
00:34:26.870 07:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 3394413
00:34:26.871 07:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 3394413 ']'
00:34:26.871 07:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 3394413
00:34:26.871 07:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname
00:34:27.131 07:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:34:27.131 07:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3394413
00:34:27.131 07:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:34:27.131 07:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:34:27.131 07:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3394413'
00:34:27.131 killing process with pid 3394413
00:34:27.131 07:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 3394413
00:34:27.131 07:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 3394413
00:34:27.131 07:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:34:27.131 07:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:34:27.131 07:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:34:27.131 07:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr
00:34:27.131 07:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save
00:34:27.131 07:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:34:27.131 07:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore
00:34:27.131 07:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:34:27.131 07:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns
00:34:27.131 07:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:27.131 07:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:34:27.131 07:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:29.675 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:34:29.675 00:34:29.675
00:34:29.675 real 0m28.182s
00:34:29.675 user 2m10.815s
00:34:29.675 sys 0m12.367s
00:34:29.675 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable
00:34:29.675 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:34:29.675 ************************************
00:34:29.675 END TEST nvmf_fio_target
00:34:29.675 ************************************
00:34:29.675 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode
00:34:29.675 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:34:29.675 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable
00:34:29.675 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:34:29.675 ************************************
00:34:29.675 START TEST nvmf_bdevio
00:34:29.675 ************************************
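bdevio.sh opens by locating its test storage and probing the installed lcov, and the wall of xtrace that follows is scripts/common.sh comparing version strings component-wise (lt 1.15 2 expands to cmp_versions 1.15 '<' 2, splitting on the .-: characters). A rough bash paraphrase of that comparison, not the script's exact implementation:

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local IFS='.-:'              # split "1.15" -> (1 15), "2" -> (2)
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            # first differing component decides; missing components count as 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $2 == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $2 == '<' ]]; return; }
        done
        [[ $2 == '==' ]]             # all components equal
    }
    lt 1.15 2 && echo "lcov 1.15 predates 2"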
00:34:29.675 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:29.675 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:29.675 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:34:29.675 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:29.675 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:29.675 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:29.675 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:29.675 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:29.675 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:34:29.675 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:34:29.675 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:34:29.675 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:34:29.675 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:34:29.675 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:34:29.675 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:34:29.675 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:29.675 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:34:29.675 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:34:29.675 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:29.675 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:29.675 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:34:29.675 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:34:29.675 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:29.675 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:34:29.675 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:34:29.675 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:34:29.675 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:34:29.675 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:29.675 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:34:29.675 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:34:29.675 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:29.675 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:29.675 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:34:29.675 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:29.675 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:29.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:29.675 --rc genhtml_branch_coverage=1 00:34:29.675 --rc genhtml_function_coverage=1 00:34:29.675 --rc genhtml_legend=1 00:34:29.675 --rc geninfo_all_blocks=1 00:34:29.675 --rc geninfo_unexecuted_blocks=1 00:34:29.675 00:34:29.675 ' 00:34:29.675 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:29.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:29.675 --rc genhtml_branch_coverage=1 00:34:29.675 --rc genhtml_function_coverage=1 00:34:29.675 --rc genhtml_legend=1 00:34:29.675 --rc geninfo_all_blocks=1 00:34:29.676 --rc geninfo_unexecuted_blocks=1 00:34:29.676 00:34:29.676 ' 00:34:29.676 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:29.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:29.676 --rc genhtml_branch_coverage=1 00:34:29.676 --rc genhtml_function_coverage=1 00:34:29.676 --rc genhtml_legend=1 00:34:29.676 --rc geninfo_all_blocks=1 00:34:29.676 --rc geninfo_unexecuted_blocks=1 00:34:29.676 00:34:29.676 ' 00:34:29.676 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:29.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:29.676 --rc genhtml_branch_coverage=1 00:34:29.676 --rc genhtml_function_coverage=1 00:34:29.676 --rc genhtml_legend=1 00:34:29.676 --rc geninfo_all_blocks=1 00:34:29.676 --rc geninfo_unexecuted_blocks=1 00:34:29.676 00:34:29.676 ' 00:34:29.676 07:17:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:29.676 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:34:29.676 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:29.676 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:29.676 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:29.676 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:29.676 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:29.676 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:29.676 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:29.676 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:29.676 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:29.676 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:29.676 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:29.676 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:29.676 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:29.676 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:29.676 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:29.676 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:29.676 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:29.676 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:34:29.676 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:29.676 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:29.676 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:29.676 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:29.676 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:29.676 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:29.676 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:34:29.676 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:29.676 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:34:29.676 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:29.676 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:29.676 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:29.676 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:29.676 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:29.676 07:17:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:29.676 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:29.676 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:29.676 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:29.676 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:29.676 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:29.676 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:29.676 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:34:29.676 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:34:29.676 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:29.676 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:34:29.676 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:34:29.676 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:34:29.676 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:29.676 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:29.676 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:29.676 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:34:29.676 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:34:29.676 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:34:29.676 07:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:37.823 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:37.823 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:34:37.823 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:37.823 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:37.823 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:37.823 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:37.823 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:37.823 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:34:37.823 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:34:37.823 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:34:37.823 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:34:37.823 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:34:37.823 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:34:37.823 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:34:37.823 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:34:37.823 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:37.823 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:37.823 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:37.824 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:37.824 07:17:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:37.824 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:37.824 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 
-- # [[ tcp == tcp ]] 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:37.824 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:37.824 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:37.824 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.624 ms 00:34:37.824 00:34:37.824 --- 10.0.0.2 ping statistics --- 00:34:37.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:37.824 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:37.824 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:37.824 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:34:37.824 00:34:37.824 --- 10.0.0.1 ping statistics --- 00:34:37.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:37.824 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:37.824 07:17:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=3403113 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 3403113 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:34:37.824 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 3403113 ']' 00:34:37.825 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:37.825 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:37.825 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:37.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:37.825 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:37.825 07:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:37.825 [2024-10-16 07:17:36.476148] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:37.825 [2024-10-16 07:17:36.477262] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:34:37.825 [2024-10-16 07:17:36.477313] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:37.825 [2024-10-16 07:17:36.566628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:37.825 [2024-10-16 07:17:36.618743] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:37.825 [2024-10-16 07:17:36.618795] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:37.825 [2024-10-16 07:17:36.618803] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:37.825 [2024-10-16 07:17:36.618811] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:37.825 [2024-10-16 07:17:36.618817] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:37.825 [2024-10-16 07:17:36.620873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:34:37.825 [2024-10-16 07:17:36.621006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:34:37.825 [2024-10-16 07:17:36.621260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:34:37.825 [2024-10-16 07:17:36.621263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:37.825 [2024-10-16 07:17:36.697736] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
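The interface plumbing and target launch traced above reduce to a short command sequence; a sketch, assuming the cvl_0_0/cvl_0_1 E810 ports discovered earlier and the SPDK build in this workspace (the RPC-socket poll stands in for the harness's waitforlisten):

    # Move the target-side port into its own namespace and address both ends.
    sudo ip netns add cvl_0_0_ns_spdk
    sudo ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    sudo ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side
    sudo ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    sudo ip link set cvl_0_1 up
    sudo ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    sudo ip netns exec cvl_0_0_ns_spdk ip link set lo up
    sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                            # sanity check, as in the log
    # Start nvmf_tgt inside the namespace with the flags shown above,
    # then poll its default RPC socket until it answers.
    sudo ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
    until sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done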
00:34:37.825 [2024-10-16 07:17:36.698990] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:37.825 [2024-10-16 07:17:36.699065] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:37.825 [2024-10-16 07:17:36.699594] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:37.825 [2024-10-16 07:17:36.699634] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:37.825 07:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:37.825 07:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:34:37.825 07:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:37.825 07:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:37.825 07:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:38.087 07:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:38.087 07:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:38.087 07:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.087 07:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:38.087 [2024-10-16 07:17:37.354384] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:38.087 07:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.087 07:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:38.087 07:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.087 07:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:38.087 Malloc0 00:34:38.087 07:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.087 07:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:38.087 07:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.087 07:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:38.087 07:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.087 07:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:38.087 07:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.087 07:17:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:38.087 07:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.087 07:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:38.087 07:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.087 07:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:38.087 [2024-10-16 07:17:37.442761] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:38.087 07:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.087 07:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:34:38.087 07:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:34:38.087 07:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:34:38.087 07:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:34:38.087 07:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:38.087 07:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:38.087 { 00:34:38.087 "params": { 00:34:38.087 "name": "Nvme$subsystem", 00:34:38.087 "trtype": "$TEST_TRANSPORT", 00:34:38.087 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:38.087 "adrfam": "ipv4", 00:34:38.087 "trsvcid": "$NVMF_PORT", 00:34:38.087 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:38.087 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:38.087 "hdgst": ${hdgst:-false}, 00:34:38.087 "ddgst": ${ddgst:-false} 00:34:38.087 }, 00:34:38.087 "method": "bdev_nvme_attach_controller" 00:34:38.087 } 00:34:38.087 EOF 00:34:38.087 )") 00:34:38.087 07:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:34:38.087 07:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 00:34:38.087 07:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:34:38.087 07:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:34:38.087 "params": { 00:34:38.087 "name": "Nvme1", 00:34:38.087 "trtype": "tcp", 00:34:38.087 "traddr": "10.0.0.2", 00:34:38.087 "adrfam": "ipv4", 00:34:38.087 "trsvcid": "4420", 00:34:38.087 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:38.087 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:38.087 "hdgst": false, 00:34:38.087 "ddgst": false 00:34:38.087 }, 00:34:38.087 "method": "bdev_nvme_attach_controller" 00:34:38.087 }' 00:34:38.087 [2024-10-16 07:17:37.507682] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
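The rpc_cmd calls above map one-to-one onto rpc.py subcommands; a sketch of the same subsystem setup against the default /var/tmp/spdk.sock socket (the rpc() helper is an assumed stand-in for the harness's rpc_cmd wrapper):

    rpc() { sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
    rpc nvmf_create_transport -t tcp -o -u 8192                    # TCP transport, 8 KiB IO unit
    rpc bdev_malloc_create 64 512 -b Malloc0                       # 64 MiB RAM-backed bdev
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # expose Malloc0 as a namespace
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevio then attaches over the wire using the generated JSON visible above (controller Nvme1, traddr 10.0.0.2 port 4420, subnqn cnode1), fed in on /dev/fd/62.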
00:34:38.087 [2024-10-16 07:17:37.507757] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3403161 ] 00:34:38.349 [2024-10-16 07:17:37.593168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:38.349 [2024-10-16 07:17:37.650460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:38.349 [2024-10-16 07:17:37.650624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:38.349 [2024-10-16 07:17:37.650624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:38.610 I/O targets: 00:34:38.610 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:34:38.610 00:34:38.610 00:34:38.610 CUnit - A unit testing framework for C - Version 2.1-3 00:34:38.610 http://cunit.sourceforge.net/ 00:34:38.610 00:34:38.610 00:34:38.610 Suite: bdevio tests on: Nvme1n1 00:34:38.610 Test: blockdev write read block ...passed 00:34:38.610 Test: blockdev write zeroes read block ...passed 00:34:38.610 Test: blockdev write zeroes read no split ...passed 00:34:38.610 Test: blockdev write zeroes read split ...passed 00:34:38.610 Test: blockdev write zeroes read split partial ...passed 00:34:38.610 Test: blockdev reset ...[2024-10-16 07:17:38.025942] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:38.610 [2024-10-16 07:17:38.026058] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16390d0 (9): Bad file descriptor 00:34:38.610 [2024-10-16 07:17:38.038891] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:34:38.611 passed 00:34:38.611 Test: blockdev write read 8 blocks ...passed 00:34:38.611 Test: blockdev write read size > 128k ...passed 00:34:38.611 Test: blockdev write read invalid size ...passed 00:34:38.872 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:38.872 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:38.872 Test: blockdev write read max offset ...passed 00:34:38.872 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:38.872 Test: blockdev writev readv 8 blocks ...passed 00:34:38.872 Test: blockdev writev readv 30 x 1block ...passed 00:34:38.872 Test: blockdev writev readv block ...passed 00:34:38.872 Test: blockdev writev readv size > 128k ...passed 00:34:38.872 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:38.872 Test: blockdev comparev and writev ...[2024-10-16 07:17:38.264675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:38.872 [2024-10-16 07:17:38.264724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:38.872 [2024-10-16 07:17:38.264741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:38.872 [2024-10-16 07:17:38.264751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:38.872 [2024-10-16 07:17:38.265391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:38.872 [2024-10-16 07:17:38.265404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:38.872 [2024-10-16 07:17:38.265419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:38.872 [2024-10-16 07:17:38.265427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:38.872 [2024-10-16 07:17:38.266101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:38.872 [2024-10-16 07:17:38.266113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:38.872 [2024-10-16 07:17:38.266126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:38.872 [2024-10-16 07:17:38.266134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:38.872 [2024-10-16 07:17:38.266777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:38.872 [2024-10-16 07:17:38.266789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:38.872 [2024-10-16 07:17:38.266802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:38.872 [2024-10-16 07:17:38.266810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:38.872 passed 00:34:38.872 Test: blockdev nvme passthru rw ...passed 00:34:38.872 Test: blockdev nvme passthru vendor specific ...[2024-10-16 07:17:38.351753] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:38.872 [2024-10-16 07:17:38.351771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:38.872 [2024-10-16 07:17:38.352153] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:38.872 [2024-10-16 07:17:38.352165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:38.872 [2024-10-16 07:17:38.352576] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:38.872 [2024-10-16 07:17:38.352587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:38.872 [2024-10-16 07:17:38.352963] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:38.872 [2024-10-16 07:17:38.352976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:38.872 passed 00:34:38.872 Test: blockdev nvme admin passthru ...passed 00:34:39.133 Test: blockdev copy ...passed 00:34:39.133 00:34:39.133 Run Summary: Type Total Ran Passed Failed Inactive 00:34:39.133 suites 1 1 n/a 0 0 00:34:39.133 tests 23 23 23 0 0 00:34:39.133 asserts 152 152 152 0 n/a 00:34:39.133 00:34:39.133 Elapsed time = 1.121 seconds 00:34:39.133 07:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:39.133 07:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.133 07:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:39.133 07:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.133 07:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:34:39.133 07:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:34:39.133 07:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:34:39.133 07:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:34:39.133 07:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:39.133 07:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:34:39.133 07:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:39.133 07:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:39.133 rmmod nvme_tcp 00:34:39.133 rmmod nvme_fabrics 00:34:39.133 rmmod nvme_keyring 00:34:39.133 07:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
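The nvmftestfini teardown underway here amounts to the following standalone steps (a sketch; the pid and device names are the ones from this run, and ip netns del is the assumed effect of _remove_spdk_ns):

    sudo modprobe -r nvme-tcp       # also drops nvme_fabrics/nvme_keyring, per the rmmod lines above
    sudo modprobe -r nvme-fabrics
    sudo kill 3403113               # nvmfpid recorded when the target started
    # Strip the SPDK_NVMF-tagged ACCEPT rule added during setup.
    sudo iptables-save | grep -v SPDK_NVMF | sudo iptables-restore
    sudo ip netns del cvl_0_0_ns_spdk
    sudo ip -4 addr flush cvl_0_1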
00:34:39.133 07:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:34:39.133 07:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:34:39.133 07:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 3403113 ']' 00:34:39.133 07:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 3403113 00:34:39.133 07:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 3403113 ']' 00:34:39.133 07:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 3403113 00:34:39.133 07:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:34:39.133 07:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:39.395 07:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3403113 00:34:39.395 07:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:34:39.395 07:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:34:39.395 07:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3403113' 00:34:39.395 killing process with pid 3403113 00:34:39.395 07:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 3403113 00:34:39.395 07:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 3403113 00:34:39.395 07:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:34:39.395 07:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:34:39.395 07:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:34:39.395 07:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:34:39.395 07:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:34:39.395 07:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:34:39.395 07:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:34:39.395 07:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:39.395 07:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:39.395 07:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:39.395 07:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:39.395 07:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:41.941 07:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:41.941 00:34:41.941 real 0m12.242s 00:34:41.941 user 
0m9.630s 00:34:41.941 sys 0m6.374s 00:34:41.941 07:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:41.941 07:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:41.941 ************************************ 00:34:41.941 END TEST nvmf_bdevio 00:34:41.941 ************************************ 00:34:41.941 07:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:34:41.941 00:34:41.941 real 5m1.698s 00:34:41.941 user 10m10.613s 00:34:41.941 sys 2m6.383s 00:34:41.941 07:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:41.941 07:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:41.941 ************************************ 00:34:41.941 END TEST nvmf_target_core_interrupt_mode 00:34:41.942 ************************************ 00:34:41.942 07:17:41 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:34:41.942 07:17:41 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:34:41.942 07:17:41 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:41.942 07:17:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:41.942 ************************************ 00:34:41.942 START TEST nvmf_interrupt 00:34:41.942 ************************************ 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:34:41.942 * Looking for test storage... 
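As with bdevio, the interrupt suite announced above can be invoked directly; a sketch under the same workspace assumption:

    # Hypothetical standalone run of the test the harness is starting here.
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sudo test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode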
00:34:41.942 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lcov --version 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:41.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:41.942 --rc genhtml_branch_coverage=1 00:34:41.942 --rc genhtml_function_coverage=1 00:34:41.942 --rc genhtml_legend=1 00:34:41.942 --rc geninfo_all_blocks=1 00:34:41.942 --rc geninfo_unexecuted_blocks=1 00:34:41.942 00:34:41.942 ' 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:41.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:41.942 --rc genhtml_branch_coverage=1 00:34:41.942 --rc genhtml_function_coverage=1 00:34:41.942 --rc genhtml_legend=1 00:34:41.942 --rc geninfo_all_blocks=1 00:34:41.942 --rc geninfo_unexecuted_blocks=1 00:34:41.942 00:34:41.942 ' 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:41.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:41.942 --rc genhtml_branch_coverage=1 00:34:41.942 --rc genhtml_function_coverage=1 00:34:41.942 --rc genhtml_legend=1 00:34:41.942 --rc geninfo_all_blocks=1 00:34:41.942 --rc geninfo_unexecuted_blocks=1 00:34:41.942 00:34:41.942 ' 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:41.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:41.942 --rc genhtml_branch_coverage=1 00:34:41.942 --rc genhtml_function_coverage=1 00:34:41.942 --rc genhtml_legend=1 00:34:41.942 --rc geninfo_all_blocks=1 00:34:41.942 --rc geninfo_unexecuted_blocks=1 00:34:41.942 00:34:41.942 ' 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:41.942 07:17:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # prepare_net_devs 00:34:41.943 07:17:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@436 -- # local -g is_hw=no 00:34:41.943 07:17:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # remove_spdk_ns 00:34:41.943 07:17:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:41.943 07:17:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:41.943 07:17:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:41.943 07:17:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:34:41.943 07:17:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:34:41.943 07:17:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:34:41.943 07:17:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:50.085 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:50.085 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:34:50.085 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:50.085 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:50.085 07:17:48 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:50.085 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:50.085 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:50.085 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:34:50.085 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:50.085 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:34:50.085 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:34:50.085 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:34:50.085 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:34:50.085 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:34:50.085 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:34:50.085 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:50.085 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:50.085 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:50.085 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:50.085 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:50.085 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:50.085 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:50.085 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:50.085 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:50.085 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:50.085 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:50.085 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:50.085 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:50.085 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:50.085 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:50.085 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:50.085 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:50.085 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:50.085 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:50.085 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:50.085 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:50.085 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:50.085 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:50.085 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:50.086 07:17:48 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:50.086 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:50.086 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:50.086 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # is_hw=yes 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:34:50.086 07:17:48 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:50.086 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:50.086 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.545 ms 00:34:50.086 00:34:50.086 --- 10.0.0.2 ping statistics --- 00:34:50.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:50.086 rtt min/avg/max/mdev = 0.545/0.545/0.545/0.000 ms 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:50.086 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:50.086 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:34:50.086 00:34:50.086 --- 10.0.0.1 ping statistics --- 00:34:50.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:50.086 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@448 -- # return 0 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # nvmfpid=3407588 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # waitforlisten 3407588 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@831 -- # '[' -z 3407588 ']' 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:50.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:50.086 07:17:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:50.086 [2024-10-16 07:17:48.817939] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:50.086 [2024-10-16 07:17:48.819077] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:34:50.086 [2024-10-16 07:17:48.819128] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:50.086 [2024-10-16 07:17:48.907011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:50.086 [2024-10-16 07:17:48.959051] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
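For readers following the setup in the trace above: nvmftestinit moved the target-side port into its own network namespace so both ends of the NVMe/TCP connection can run on one host over real NICs. A minimal sketch of that wiring, using hypothetical names (ini0/tgt0 stand in for cvl_0_1/cvl_0_0, nvmf_ns for cvl_0_0_ns_spdk):

ip netns add nvmf_ns                          # namespace that will own the target port
ip link set tgt0 netns nvmf_ns                # move the target-side port into it
ip addr add 10.0.0.1/24 dev ini0              # initiator address stays in the root namespace
ip netns exec nvmf_ns ip addr add 10.0.0.2/24 dev tgt0
ip link set ini0 up
ip netns exec nvmf_ns ip link set tgt0 up
ip netns exec nvmf_ns ip link set lo up
iptables -I INPUT 1 -i ini0 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                            # root namespace -> target, as in the trace
ip netns exec nvmf_ns ping -c 1 10.0.0.1      # target namespace -> initiator

Any target process launched with 'ip netns exec nvmf_ns ...' then listens on 10.0.0.2:4420 while the initiator connects from the root namespace, which is exactly how nvmf_tgt was started above (ip netns exec cvl_0_0_ns_spdk ... -e 0xFFFF --interrupt-mode -m 0x3).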
00:34:50.086 [2024-10-16 07:17:48.959100] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:50.086 [2024-10-16 07:17:48.959110] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:50.086 [2024-10-16 07:17:48.959117] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:50.086 [2024-10-16 07:17:48.959124] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:50.086 [2024-10-16 07:17:48.960893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:50.086 [2024-10-16 07:17:48.960960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:50.086 [2024-10-16 07:17:49.038085] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:50.086 [2024-10-16 07:17:49.038894] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:50.086 [2024-10-16 07:17:49.039098] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:50.348 07:17:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:50.348 07:17:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # return 0 00:34:50.348 07:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:50.348 07:17:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:50.348 07:17:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:50.348 07:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:50.348 07:17:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:34:50.348 07:17:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:34:50.348 07:17:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:34:50.348 07:17:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:34:50.348 5000+0 records in 00:34:50.348 5000+0 records out 00:34:50.348 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0188437 s, 543 MB/s 00:34:50.348 07:17:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:34:50.348 07:17:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:50.348 07:17:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:50.348 AIO0 00:34:50.348 07:17:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:50.348 07:17:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:34:50.348 07:17:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:50.348 07:17:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:50.348 [2024-10-16 07:17:49.757939] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:50.348 07:17:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:50.348 07:17:49 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:50.348 07:17:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:50.348 07:17:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:50.348 07:17:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:50.348 07:17:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:34:50.348 07:17:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:50.348 07:17:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:50.348 07:17:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:50.348 07:17:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:50.348 07:17:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:50.348 07:17:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:50.348 [2024-10-16 07:17:49.802399] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:50.348 07:17:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:50.348 07:17:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:34:50.348 07:17:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3407588 0 00:34:50.348 07:17:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3407588 0 idle 00:34:50.348 07:17:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3407588 00:34:50.348 07:17:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:50.348 07:17:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:50.348 07:17:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:50.348 07:17:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:50.348 07:17:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:50.348 07:17:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:50.348 07:17:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:50.348 07:17:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:50.348 07:17:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:50.348 07:17:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3407588 -w 256 00:34:50.348 07:17:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:50.610 07:17:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3407588 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.30 reactor_0' 00:34:50.610 07:17:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3407588 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.30 reactor_0 00:34:50.610 07:17:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:50.610 07:17:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:50.610 07:17:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:50.610 07:17:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:34:50.610 07:17:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:50.610 07:17:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:50.610 07:17:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:50.610 07:17:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:50.610 07:17:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:34:50.610 07:17:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3407588 1 00:34:50.610 07:17:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3407588 1 idle 00:34:50.610 07:17:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3407588 00:34:50.610 07:17:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:50.610 07:17:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:50.610 07:17:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:50.610 07:17:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:50.610 07:17:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:50.610 07:17:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:50.610 07:17:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:50.610 07:17:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:50.610 07:17:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:50.610 07:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3407588 -w 256 00:34:50.610 07:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:50.871 07:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3407633 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.00 reactor_1' 00:34:50.871 07:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3407633 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.00 reactor_1 00:34:50.871 07:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:50.871 07:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:50.871 07:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:50.871 07:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:50.871 07:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:50.871 07:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:50.871 07:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:50.871 07:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:50.871 07:17:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:34:50.871 07:17:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=3407866 00:34:50.871 07:17:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:34:50.871 07:17:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:34:50.871 07:17:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 
0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:50.871 07:17:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3407588 0 00:34:50.871 07:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3407588 0 busy 00:34:50.871 07:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3407588 00:34:50.871 07:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:50.871 07:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:34:50.871 07:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:34:50.871 07:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:50.871 07:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:34:50.871 07:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:50.871 07:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:50.871 07:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:50.871 07:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3407588 -w 256 00:34:50.871 07:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:50.871 07:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3407588 root 20 0 128.2g 44928 32256 R 66.7 0.0 0:00.41 reactor_0' 00:34:51.133 07:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3407588 root 20 0 128.2g 44928 32256 R 66.7 0.0 0:00.41 reactor_0 00:34:51.133 07:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:51.133 07:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:51.133 07:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=66.7 00:34:51.133 07:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=66 00:34:51.133 07:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:34:51.133 07:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:34:51.133 07:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:34:51.133 07:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:51.133 07:17:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:34:51.133 07:17:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:34:51.133 07:17:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3407588 1 00:34:51.133 07:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3407588 1 busy 00:34:51.133 07:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3407588 00:34:51.133 07:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:51.133 07:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:34:51.133 07:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:34:51.133 07:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:51.133 07:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:34:51.133 07:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:51.133 07:17:50 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:34:51.133 07:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:51.133 07:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3407588 -w 256 00:34:51.133 07:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:51.133 07:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3407633 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.24 reactor_1' 00:34:51.133 07:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3407633 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.24 reactor_1 00:34:51.133 07:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:51.133 07:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:51.133 07:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:34:51.133 07:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:34:51.133 07:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:34:51.133 07:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:34:51.133 07:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:34:51.133 07:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:51.133 07:17:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 3407866 00:35:01.135 Initializing NVMe Controllers 00:35:01.135 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:01.135 Controller IO queue size 256, less than required. 00:35:01.135 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:01.135 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:35:01.135 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:35:01.135 Initialization complete. Launching workers. 
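Before the summary table that follows, it is worth decoding the spdk_nvme_perf invocation that produced it. The flag meanings below are restated from the command line traced above; $SPDK is a hypothetical stand-in for the workspace checkout path:

# -q 256    outstanding I/O per queue (hence the 'queue size 256, less than required' notice)
# -o 4096   I/O size in bytes
# -w randrw random mixed read/write workload
# -M 30     read percentage of the mix (30% reads, 70% writes)
# -t 10     run time in seconds
# -c 0xC    core mask, i.e. lcores 2 and 3, matching the two 'Associating ... with lcore' lines
# -r '...'  transport ID of the listener created earlier on 10.0.0.2:4420
$SPDK/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'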
00:35:01.135 ======================================================== 00:35:01.135 Latency(us) 00:35:01.135 Device Information : IOPS MiB/s Average min max 00:35:01.135 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 19494.10 76.15 13136.59 3989.68 32878.55 00:35:01.135 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 17992.00 70.28 14230.99 7791.82 51888.30 00:35:01.135 ======================================================== 00:35:01.135 Total : 37486.10 146.43 13661.86 3989.68 51888.30 00:35:01.135 00:35:01.135 07:18:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:35:01.135 07:18:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3407588 0 00:35:01.135 07:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3407588 0 idle 00:35:01.135 07:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3407588 00:35:01.135 07:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:01.135 07:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:01.135 07:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:01.135 07:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:01.135 07:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:01.135 07:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:01.135 07:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:01.135 07:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:01.135 07:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:01.135 07:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3407588 -w 256 00:35:01.135 07:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:01.135 07:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3407588 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.28 reactor_0' 00:35:01.135 07:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3407588 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.28 reactor_0 00:35:01.135 07:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:01.135 07:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:01.135 07:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:01.135 07:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:01.135 07:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:01.135 07:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:01.135 07:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:01.135 07:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:01.135 07:18:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:35:01.135 07:18:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3407588 1 00:35:01.135 07:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3407588 1 idle 00:35:01.135 07:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3407588 00:35:01.135 07:18:00 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:35:01.135 07:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:01.135 07:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:01.135 07:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:01.135 07:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:01.135 07:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:01.135 07:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:01.135 07:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:01.135 07:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:01.135 07:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3407588 -w 256 00:35:01.135 07:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:01.395 07:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3407633 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1' 00:35:01.395 07:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3407633 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1 00:35:01.395 07:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:01.395 07:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:01.395 07:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:01.395 07:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:01.395 07:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:01.395 07:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:01.395 07:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:01.395 07:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:01.395 07:18:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:02.339 07:18:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:35:02.339 07:18:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1198 -- # local i=0 00:35:02.339 07:18:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:35:02.339 07:18:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:35:02.339 07:18:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1205 -- # sleep 2 00:35:04.255 07:18:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:35:04.255 07:18:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:35:04.255 07:18:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:35:04.255 07:18:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:35:04.255 07:18:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:35:04.255 07:18:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # return 0 00:35:04.255 07:18:03 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:35:04.255 07:18:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3407588 0 00:35:04.255 07:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3407588 0 idle 00:35:04.255 07:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3407588 00:35:04.255 07:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:04.255 07:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:04.255 07:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:04.255 07:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:04.255 07:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:04.255 07:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:04.255 07:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:04.255 07:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:04.255 07:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:04.255 07:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3407588 -w 256 00:35:04.255 07:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:04.255 07:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3407588 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.67 reactor_0' 00:35:04.255 07:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3407588 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.67 reactor_0 00:35:04.255 07:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:04.255 07:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:04.255 07:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:04.255 07:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:04.255 07:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:04.255 07:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:04.255 07:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:04.255 07:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:04.255 07:18:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:35:04.255 07:18:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3407588 1 00:35:04.255 07:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3407588 1 idle 00:35:04.255 07:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3407588 00:35:04.255 07:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:04.255 07:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:04.255 07:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:04.255 07:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:04.255 07:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:04.255 07:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:04.255 07:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
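The reactor_is_busy_or_idle helper exercised throughout this test reduces to one batched top sample per check, which is what the trace is about to run again here. A minimal sketch of the core of that check, assuming the target pid in $pid, the reactor index in $idx, and 30 as the idle threshold used above:

# One non-interactive top pass over the target's threads (-b batch, -H threads,
# -n 1 iteration, -w 256 wide output); pick the reactor_$idx thread line, read
# column 9 (%CPU), drop the fractional part, and call the reactor idle when the
# rate is at or below the threshold.
line=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx")
cpu=$(awk '{print $9}' <<< "$line")
cpu=${cpu%.*}                     # e.g. 99.9 -> 99, 0.0 -> 0
if (( cpu > 30 )); then
    echo "reactor_$idx busy (${cpu}%)"
else
    echo "reactor_$idx idle (${cpu}%)"
fi

The real helper additionally retries up to ten times (the (( j = 10 )) loop visible in the trace) before declaring a state, so a single noisy top sample cannot fail the test.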
00:35:04.255 07:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:04.255 07:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:04.255 07:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3407588 -w 256 00:35:04.255 07:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:04.541 07:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3407633 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.15 reactor_1' 00:35:04.541 07:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3407633 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.15 reactor_1 00:35:04.541 07:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:04.541 07:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:04.541 07:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:04.541 07:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:04.541 07:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:04.541 07:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:04.541 07:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:04.541 07:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:04.541 07:18:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:04.541 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:04.541 07:18:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:04.541 07:18:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1219 -- # local i=0 00:35:04.541 07:18:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:35:04.541 07:18:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:04.802 07:18:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:35:04.802 07:18:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:04.802 07:18:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # return 0 00:35:04.802 07:18:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:35:04.802 07:18:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:35:04.802 07:18:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@514 -- # nvmfcleanup 00:35:04.802 07:18:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:35:04.802 07:18:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:04.802 07:18:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:35:04.802 07:18:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:04.802 07:18:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:04.802 rmmod nvme_tcp 00:35:04.802 rmmod nvme_fabrics 00:35:04.802 rmmod nvme_keyring 00:35:04.802 07:18:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:04.802 07:18:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:35:04.802 07:18:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:35:04.802 07:18:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@515 -- # '[' -n 
3407588 ']' 00:35:04.802 07:18:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # killprocess 3407588 00:35:04.802 07:18:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@950 -- # '[' -z 3407588 ']' 00:35:04.802 07:18:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # kill -0 3407588 00:35:04.802 07:18:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # uname 00:35:04.802 07:18:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:04.802 07:18:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3407588 00:35:04.802 07:18:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:04.802 07:18:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:04.802 07:18:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3407588' 00:35:04.802 killing process with pid 3407588 00:35:04.802 07:18:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@969 -- # kill 3407588 00:35:04.802 07:18:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@974 -- # wait 3407588 00:35:05.063 07:18:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:35:05.063 07:18:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:35:05.063 07:18:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:35:05.063 07:18:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:35:05.063 07:18:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-save 00:35:05.063 07:18:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:35:05.063 07:18:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-restore 00:35:05.063 07:18:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:05.063 07:18:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:05.063 07:18:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:05.063 07:18:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:05.063 07:18:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:06.979 07:18:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:06.979 00:35:06.979 real 0m25.344s 00:35:06.979 user 0m40.380s 00:35:06.979 sys 0m9.670s 00:35:06.979 07:18:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:06.979 07:18:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:06.979 ************************************ 00:35:06.979 END TEST nvmf_interrupt 00:35:06.979 ************************************ 00:35:06.979 00:35:06.979 real 29m50.746s 00:35:06.979 user 61m8.812s 00:35:06.979 sys 10m9.953s 00:35:06.979 07:18:06 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:06.979 07:18:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:06.979 ************************************ 00:35:06.979 END TEST nvmf_tcp 00:35:06.979 ************************************ 00:35:07.240 07:18:06 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:35:07.240 07:18:06 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:07.240 07:18:06 -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:07.240 07:18:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:07.240 07:18:06 -- common/autotest_common.sh@10 -- # set +x 00:35:07.240 ************************************ 00:35:07.240 START TEST spdkcli_nvmf_tcp 00:35:07.240 ************************************ 00:35:07.240 07:18:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:07.240 * Looking for test storage... 00:35:07.240 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:35:07.240 07:18:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:07.240 07:18:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:35:07.240 07:18:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:07.240 07:18:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:07.240 07:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:07.240 07:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:07.240 07:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:07.240 07:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:35:07.240 07:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:35:07.240 07:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:35:07.240 07:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:35:07.240 07:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:35:07.240 07:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:35:07.240 07:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:35:07.240 07:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:07.240 07:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:35:07.241 07:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:35:07.241 07:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:07.241 07:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:07.241 07:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:35:07.241 07:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:35:07.241 07:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:07.241 07:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:35:07.241 07:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:35:07.241 07:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:35:07.241 07:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:35:07.241 07:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:07.241 07:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:35:07.241 07:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:35:07.241 07:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:07.241 07:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:07.241 07:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:35:07.241 07:18:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:07.241 07:18:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:07.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:07.241 --rc genhtml_branch_coverage=1 00:35:07.241 --rc genhtml_function_coverage=1 00:35:07.241 --rc genhtml_legend=1 00:35:07.241 --rc geninfo_all_blocks=1 00:35:07.241 --rc geninfo_unexecuted_blocks=1 00:35:07.241 00:35:07.241 ' 00:35:07.241 07:18:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:07.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:07.241 --rc genhtml_branch_coverage=1 00:35:07.241 --rc genhtml_function_coverage=1 00:35:07.241 --rc genhtml_legend=1 00:35:07.241 --rc geninfo_all_blocks=1 00:35:07.241 --rc geninfo_unexecuted_blocks=1 00:35:07.241 00:35:07.241 ' 00:35:07.241 07:18:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:07.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:07.241 --rc genhtml_branch_coverage=1 00:35:07.241 --rc genhtml_function_coverage=1 00:35:07.241 --rc genhtml_legend=1 00:35:07.241 --rc geninfo_all_blocks=1 00:35:07.241 --rc geninfo_unexecuted_blocks=1 00:35:07.241 00:35:07.241 ' 00:35:07.241 07:18:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:07.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:07.241 --rc genhtml_branch_coverage=1 00:35:07.241 --rc genhtml_function_coverage=1 00:35:07.241 --rc genhtml_legend=1 00:35:07.241 --rc geninfo_all_blocks=1 00:35:07.241 --rc geninfo_unexecuted_blocks=1 00:35:07.241 00:35:07.241 ' 00:35:07.241 07:18:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:35:07.241 07:18:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:35:07.241 07:18:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:35:07.241 07:18:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:07.241 07:18:06 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:35:07.502 
07:18:06 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:07.502 07:18:06 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:07.502 07:18:06 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:07.502 07:18:06 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:07.502 07:18:06 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:07.502 07:18:06 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:07.502 07:18:06 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:07.502 07:18:06 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:07.502 07:18:06 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:07.502 07:18:06 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:07.502 07:18:06 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:07.502 07:18:06 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:07.502 07:18:06 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:07.502 07:18:06 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:07.502 07:18:06 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:07.502 07:18:06 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:07.502 07:18:06 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:07.502 07:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:35:07.502 07:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:07.502 07:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:07.502 07:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:07.502 07:18:06 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.502 07:18:06 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.502 07:18:06 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.502 07:18:06 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:35:07.502 07:18:06 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.502 07:18:06 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:35:07.502 07:18:06 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:07.502 07:18:06 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:07.502 07:18:06 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:07.502 07:18:06 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:07.502 07:18:06 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:07.502 07:18:06 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:07.502 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:07.502 07:18:06 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:07.502 07:18:06 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:07.502 07:18:06 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:07.502 07:18:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:35:07.502 07:18:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:35:07.502 07:18:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:35:07.502 07:18:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:35:07.502 07:18:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:07.502 07:18:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:07.502 07:18:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:35:07.502 07:18:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3411175 00:35:07.502 07:18:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3411175 00:35:07.502 07:18:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 3411175 ']' 00:35:07.502 07:18:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:07.502 07:18:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:35:07.502 07:18:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:07.502 07:18:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:07.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:07.502 07:18:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:07.502 07:18:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:07.502 [2024-10-16 07:18:06.826064] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
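[Editor's sketch] The trace above starts nvmf_tgt (pid 3411175) and then blocks in waitforlisten with rpc_addr=/var/tmp/spdk.sock until the target's JSON-RPC socket answers. A minimal sketch of that readiness pattern follows, assuming $SPDK_DIR points at an SPDK checkout so that scripts/rpc.py is available; the function name and loop bounds here are illustrative, not SPDK's exact waitforlisten implementation:

    # Poll the JSON-RPC socket with rpc_get_methods until the target answers,
    # bailing out early if the process has already exited. Illustrative only;
    # assumes $SPDK_DIR is an SPDK source tree.
    wait_for_rpc_socket() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1    # target died before listening
            if "$SPDK_DIR/scripts/rpc.py" -s "$sock" rpc_get_methods \
                    >/dev/null 2>&1; then
                return 0                              # RPC server is up
            fi
            sleep 0.1
        done
        return 1                                      # timed out
    }

Against the run shown in this log, the equivalent call would be wait_for_rpc_socket 3411175, gating the spdkcli_job.py commands that follow until the target is ready.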
00:35:07.502 [2024-10-16 07:18:06.826118] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3411175 ] 00:35:07.502 [2024-10-16 07:18:06.897386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:07.502 [2024-10-16 07:18:06.944322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:07.502 [2024-10-16 07:18:06.944327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:08.445 07:18:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:08.445 07:18:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:35:08.445 07:18:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:35:08.445 07:18:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:08.445 07:18:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:08.445 07:18:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:35:08.445 07:18:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:35:08.445 07:18:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:35:08.445 07:18:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:08.445 07:18:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:08.445 07:18:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:35:08.445 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:35:08.445 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:35:08.445 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:35:08.445 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:35:08.445 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:35:08.445 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:35:08.445 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:08.445 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:35:08.445 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:35:08.445 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:08.445 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:08.445 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:35:08.445 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:08.445 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:08.445 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:35:08.445 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:35:08.445 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:08.445 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:08.445 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:08.445 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:35:08.445 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:35:08.445 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:08.445 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:35:08.445 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:08.445 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:35:08.445 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:35:08.445 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:35:08.445 ' 00:35:10.991 [2024-10-16 07:18:10.375093] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:12.373 [2024-10-16 07:18:11.731377] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:35:14.917 [2024-10-16 07:18:14.258430] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:35:17.460 [2024-10-16 07:18:16.476812] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:35:18.941 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:35:18.941 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:35:18.941 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:35:18.941 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:35:18.941 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:35:18.941 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:35:18.941 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:35:18.941 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:18.941 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:35:18.941 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:35:18.941 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:18.941 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:18.941 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:35:18.941 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:18.941 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:18.941 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:35:18.941 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:18.941 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:18.941 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:18.941 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:18.941 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:35:18.941 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:35:18.941 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:18.941 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:35:18.941 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:18.941 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:35:18.941 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:35:18.941 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:35:18.941 07:18:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:35:18.941 07:18:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:18.941 07:18:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:18.941 07:18:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:35:18.941 07:18:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:18.941 07:18:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:18.941 07:18:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:35:18.941 07:18:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:35:19.223 07:18:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:35:19.505 07:18:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:35:19.505 07:18:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:35:19.505 07:18:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:19.505 07:18:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:19.505 
07:18:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:35:19.505 07:18:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:19.505 07:18:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:19.505 07:18:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:35:19.505 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:35:19.505 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:19.505 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:35:19.505 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:35:19.505 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:35:19.505 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:35:19.505 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:19.505 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:35:19.505 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:35:19.505 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:35:19.505 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:35:19.505 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:35:19.505 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:35:19.505 ' 00:35:26.090 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:35:26.090 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:35:26.090 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:26.090 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:35:26.090 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:35:26.090 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:35:26.090 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:35:26.090 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:26.090 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:35:26.090 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:35:26.090 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:35:26.090 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:35:26.090 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:35:26.090 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:35:26.090 07:18:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:35:26.090 07:18:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:26.090 07:18:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:26.090 
07:18:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3411175 00:35:26.090 07:18:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 3411175 ']' 00:35:26.090 07:18:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 3411175 00:35:26.090 07:18:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:35:26.090 07:18:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:26.090 07:18:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3411175 00:35:26.090 07:18:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:26.090 07:18:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:26.090 07:18:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3411175' 00:35:26.090 killing process with pid 3411175 00:35:26.090 07:18:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 3411175 00:35:26.090 07:18:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 3411175 00:35:26.090 07:18:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:35:26.090 07:18:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:35:26.090 07:18:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3411175 ']' 00:35:26.090 07:18:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3411175 00:35:26.090 07:18:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 3411175 ']' 00:35:26.090 07:18:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 3411175 00:35:26.090 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3411175) - No such process 00:35:26.090 07:18:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 3411175 is not found' 00:35:26.090 Process with pid 3411175 is not found 00:35:26.090 07:18:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:35:26.090 07:18:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:35:26.090 07:18:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:35:26.090 00:35:26.090 real 0m18.097s 00:35:26.090 user 0m40.236s 00:35:26.090 sys 0m0.866s 00:35:26.090 07:18:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:26.090 07:18:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:26.090 ************************************ 00:35:26.090 END TEST spdkcli_nvmf_tcp 00:35:26.090 ************************************ 00:35:26.090 07:18:24 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:26.090 07:18:24 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:26.090 07:18:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:26.090 07:18:24 -- common/autotest_common.sh@10 -- # set +x 00:35:26.090 ************************************ 00:35:26.090 START TEST nvmf_identify_passthru 00:35:26.090 ************************************ 00:35:26.090 07:18:24 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:26.090 * Looking for test 
storage... 00:35:26.090 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:26.090 07:18:24 nvmf_identify_passthru -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:26.090 07:18:24 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lcov --version 00:35:26.090 07:18:24 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:26.090 07:18:24 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:26.090 07:18:24 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:26.090 07:18:24 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:26.090 07:18:24 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:26.090 07:18:24 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:35:26.090 07:18:24 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:35:26.090 07:18:24 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:35:26.090 07:18:24 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:35:26.090 07:18:24 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:35:26.090 07:18:24 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:35:26.090 07:18:24 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:35:26.090 07:18:24 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:26.090 07:18:24 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:35:26.090 07:18:24 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:35:26.090 07:18:24 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:26.090 07:18:24 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:26.090 07:18:24 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:35:26.090 07:18:24 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:35:26.090 07:18:24 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:26.090 07:18:24 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:35:26.090 07:18:24 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:35:26.090 07:18:24 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:35:26.090 07:18:24 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:35:26.090 07:18:24 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:26.090 07:18:24 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:35:26.090 07:18:24 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:35:26.090 07:18:24 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:26.090 07:18:24 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:26.090 07:18:24 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:35:26.090 07:18:24 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:26.090 07:18:24 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:26.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:26.090 --rc genhtml_branch_coverage=1 00:35:26.090 --rc genhtml_function_coverage=1 00:35:26.090 --rc genhtml_legend=1 00:35:26.090 --rc geninfo_all_blocks=1 00:35:26.090 --rc geninfo_unexecuted_blocks=1 00:35:26.090 00:35:26.090 ' 00:35:26.090 07:18:24 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:26.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:26.090 --rc genhtml_branch_coverage=1 00:35:26.090 --rc genhtml_function_coverage=1 00:35:26.090 --rc genhtml_legend=1 00:35:26.090 --rc geninfo_all_blocks=1 00:35:26.090 --rc geninfo_unexecuted_blocks=1 00:35:26.090 00:35:26.090 ' 00:35:26.090 07:18:24 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:26.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:26.090 --rc genhtml_branch_coverage=1 00:35:26.090 --rc genhtml_function_coverage=1 00:35:26.090 --rc genhtml_legend=1 00:35:26.090 --rc geninfo_all_blocks=1 00:35:26.090 --rc geninfo_unexecuted_blocks=1 00:35:26.090 00:35:26.090 ' 00:35:26.090 07:18:24 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:26.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:26.090 --rc genhtml_branch_coverage=1 00:35:26.090 --rc genhtml_function_coverage=1 00:35:26.090 --rc genhtml_legend=1 00:35:26.090 --rc geninfo_all_blocks=1 00:35:26.090 --rc geninfo_unexecuted_blocks=1 00:35:26.090 00:35:26.090 ' 00:35:26.090 07:18:24 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:26.090 07:18:24 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:35:26.090 07:18:24 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:26.090 07:18:24 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:26.090 07:18:24 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:26.090 07:18:24 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:35:26.090 07:18:24 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:26.090 07:18:24 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:26.090 07:18:24 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:26.091 07:18:24 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:26.091 07:18:24 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:26.091 07:18:24 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:26.091 07:18:24 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:26.091 07:18:24 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:26.091 07:18:24 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:26.091 07:18:24 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:26.091 07:18:24 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:26.091 07:18:24 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:26.091 07:18:24 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:26.091 07:18:24 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:26.091 07:18:24 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:26.091 07:18:24 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:26.091 07:18:24 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:26.091 07:18:24 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.091 07:18:24 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.091 07:18:24 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.091 07:18:24 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:26.091 07:18:24 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.091 07:18:24 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:35:26.091 07:18:24 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:26.091 07:18:24 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:26.091 07:18:24 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:26.091 07:18:24 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:26.091 07:18:24 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:26.091 07:18:24 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:26.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:26.091 07:18:24 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:26.091 07:18:24 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:26.091 07:18:24 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:26.091 07:18:24 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:26.091 07:18:24 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:26.091 07:18:24 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:26.091 07:18:24 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:26.091 07:18:24 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:26.091 07:18:24 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.091 07:18:24 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.091 07:18:24 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.091 07:18:24 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:26.091 07:18:24 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.091 07:18:24 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:35:26.091 07:18:24 nvmf_identify_passthru -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:35:26.091 07:18:24 nvmf_identify_passthru -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:26.091 07:18:24 nvmf_identify_passthru -- nvmf/common.sh@474 -- # prepare_net_devs 00:35:26.091 07:18:24 nvmf_identify_passthru -- nvmf/common.sh@436 -- # local -g is_hw=no 00:35:26.091 07:18:24 nvmf_identify_passthru -- nvmf/common.sh@438 -- # remove_spdk_ns 00:35:26.091 07:18:24 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:26.091 07:18:24 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:26.091 07:18:24 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:26.091 07:18:24 nvmf_identify_passthru -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:35:26.091 07:18:24 nvmf_identify_passthru -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:35:26.091 07:18:24 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:35:26.091 07:18:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:32.678 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:32.678 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:35:32.678 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:32.678 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:32.678 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:32.678 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:32.678 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:32.678 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:35:32.678 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:32.678 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:35:32.678 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:35:32.678 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:35:32.678 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:35:32.678 07:18:32 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:35:32.678 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:35:32.678 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:32.678 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:32.678 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:32.678 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:32.678 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:32.678 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:32.678 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:32.678 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:32.678 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:32.678 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:32.678 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:32.678 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:32.678 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:32.678 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:32.678 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:32.678 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:32.678 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:32.678 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:32.678 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:32.678 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:32.678 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:32.678 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:32.678 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:32.678 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:32.678 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:32.679 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:32.679 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:32.679 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:32.679 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:32.679 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:32.679 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:32.679 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:32.679 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:32.679 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:35:32.679 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:32.679 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:32.679 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:32.679 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:32.679 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:32.679 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:32.679 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:32.679 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:32.679 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:32.679 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:32.679 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:32.679 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:32.679 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:32.679 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:32.679 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:32.679 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:32.679 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:32.679 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:32.679 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:32.679 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:32.679 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:32.679 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:32.679 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:32.679 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:35:32.679 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@440 -- # is_hw=yes 00:35:32.679 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:35:32.679 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:35:32.679 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:35:32.679 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:32.679 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:32.679 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:32.679 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:32.679 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:32.679 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:32.679 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:32.679 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:32.679 07:18:32 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:32.679 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:32.679 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:32.679 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:32.679 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:32.679 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:32.679 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:32.940 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:32.940 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:32.940 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:32.940 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:32.940 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:32.940 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:32.940 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:32.940 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:32.940 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:32.940 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:35:32.940 00:35:32.940 --- 10.0.0.2 ping statistics --- 00:35:32.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:32.940 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:35:32.940 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:32.940 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:32.940 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:35:32.940 00:35:32.940 --- 10.0.0.1 ping statistics --- 00:35:32.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:32.940 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:35:32.940 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:32.940 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@448 -- # return 0 00:35:32.940 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:35:32.940 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:32.940 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:35:32.940 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:35:32.940 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:32.940 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:35:32.940 07:18:32 nvmf_identify_passthru -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:35:33.201 07:18:32 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:35:33.201 07:18:32 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:33.201 07:18:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:33.201 07:18:32 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:35:33.201 07:18:32 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:35:33.201 07:18:32 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:35:33.201 07:18:32 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:35:33.201 07:18:32 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:35:33.201 07:18:32 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:35:33.201 07:18:32 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:35:33.201 07:18:32 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:35:33.201 07:18:32 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:35:33.201 07:18:32 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:35:33.201 07:18:32 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:35:33.201 07:18:32 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:35:33.201 07:18:32 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:65:00.0 00:35:33.201 07:18:32 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:35:33.201 07:18:32 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:35:33.201 07:18:32 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:35:33.201 07:18:32 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:35:33.201 07:18:32 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:35:33.772 07:18:33 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=S64GNE0R605487 00:35:33.772 07:18:33 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:35:33.772 07:18:33 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:35:33.772 07:18:33 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:35:34.343 07:18:33 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:35:34.343 07:18:33 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:35:34.343 07:18:33 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:34.343 07:18:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:34.343 07:18:33 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:35:34.344 07:18:33 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:34.344 07:18:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:34.344 07:18:33 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3419025 00:35:34.344 07:18:33 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:34.344 07:18:33 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:35:34.344 07:18:33 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3419025 00:35:34.344 07:18:33 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 3419025 ']' 00:35:34.344 07:18:33 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:34.344 07:18:33 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:34.344 07:18:33 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:34.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:34.344 07:18:33 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:34.344 07:18:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:34.344 [2024-10-16 07:18:33.653875] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:35:34.344 [2024-10-16 07:18:33.653943] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:34.344 [2024-10-16 07:18:33.742893] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:34.344 [2024-10-16 07:18:33.797139] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:34.344 [2024-10-16 07:18:33.797196] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:35:34.344 [2024-10-16 07:18:33.797205] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:34.344 [2024-10-16 07:18:33.797212] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:34.344 [2024-10-16 07:18:33.797219] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:34.344 [2024-10-16 07:18:33.799456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:34.344 [2024-10-16 07:18:33.799619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:34.344 [2024-10-16 07:18:33.799747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:34.344 [2024-10-16 07:18:33.799747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:35.283 07:18:34 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:35.283 07:18:34 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:35:35.283 07:18:34 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:35:35.283 07:18:34 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.283 07:18:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:35.283 INFO: Log level set to 20 00:35:35.283 INFO: Requests: 00:35:35.283 { 00:35:35.283 "jsonrpc": "2.0", 00:35:35.283 "method": "nvmf_set_config", 00:35:35.283 "id": 1, 00:35:35.283 "params": { 00:35:35.283 "admin_cmd_passthru": { 00:35:35.283 "identify_ctrlr": true 00:35:35.283 } 00:35:35.283 } 00:35:35.283 } 00:35:35.283 00:35:35.283 INFO: response: 00:35:35.283 { 00:35:35.283 "jsonrpc": "2.0", 00:35:35.283 "id": 1, 00:35:35.283 "result": true 00:35:35.283 } 00:35:35.283 00:35:35.283 07:18:34 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.283 07:18:34 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:35:35.283 07:18:34 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.283 07:18:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:35.283 INFO: Setting log level to 20 00:35:35.283 INFO: Setting log level to 20 00:35:35.283 INFO: Log level set to 20 00:35:35.283 INFO: Log level set to 20 00:35:35.283 INFO: Requests: 00:35:35.283 { 00:35:35.283 "jsonrpc": "2.0", 00:35:35.283 "method": "framework_start_init", 00:35:35.283 "id": 1 00:35:35.283 } 00:35:35.283 00:35:35.283 INFO: Requests: 00:35:35.283 { 00:35:35.284 "jsonrpc": "2.0", 00:35:35.284 "method": "framework_start_init", 00:35:35.284 "id": 1 00:35:35.284 } 00:35:35.284 00:35:35.284 [2024-10-16 07:18:34.544320] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:35:35.284 INFO: response: 00:35:35.284 { 00:35:35.284 "jsonrpc": "2.0", 00:35:35.284 "id": 1, 00:35:35.284 "result": true 00:35:35.284 } 00:35:35.284 00:35:35.284 INFO: response: 00:35:35.284 { 00:35:35.284 "jsonrpc": "2.0", 00:35:35.284 "id": 1, 00:35:35.284 "result": true 00:35:35.284 } 00:35:35.284 00:35:35.284 07:18:34 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.284 07:18:34 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:35.284 07:18:34 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.284 07:18:34 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:35:35.284 INFO: Setting log level to 40 00:35:35.284 INFO: Setting log level to 40 00:35:35.284 INFO: Setting log level to 40 00:35:35.284 [2024-10-16 07:18:34.557659] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:35.284 07:18:34 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.284 07:18:34 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:35:35.284 07:18:34 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:35.284 07:18:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:35.284 07:18:34 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:35:35.284 07:18:34 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.284 07:18:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:35.544 Nvme0n1 00:35:35.544 07:18:34 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.544 07:18:34 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:35:35.544 07:18:34 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.544 07:18:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:35.544 07:18:34 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.544 07:18:34 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:35:35.544 07:18:34 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.544 07:18:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:35.544 07:18:34 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.544 07:18:34 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:35.544 07:18:34 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.544 07:18:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:35.544 [2024-10-16 07:18:34.954026] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:35.544 07:18:34 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.544 07:18:34 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:35:35.544 07:18:34 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.544 07:18:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:35.544 [ 00:35:35.544 { 00:35:35.544 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:35:35.544 "subtype": "Discovery", 00:35:35.544 "listen_addresses": [], 00:35:35.544 "allow_any_host": true, 00:35:35.544 "hosts": [] 00:35:35.544 }, 00:35:35.544 { 00:35:35.544 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:35.544 "subtype": "NVMe", 00:35:35.544 "listen_addresses": [ 00:35:35.544 { 00:35:35.544 "trtype": "TCP", 00:35:35.544 "adrfam": "IPv4", 00:35:35.544 "traddr": "10.0.0.2", 00:35:35.544 "trsvcid": "4420" 00:35:35.544 } 00:35:35.544 ], 00:35:35.544 "allow_any_host": true, 00:35:35.544 "hosts": [], 00:35:35.544 "serial_number": 
"SPDK00000000000001", 00:35:35.544 "model_number": "SPDK bdev Controller", 00:35:35.544 "max_namespaces": 1, 00:35:35.544 "min_cntlid": 1, 00:35:35.544 "max_cntlid": 65519, 00:35:35.544 "namespaces": [ 00:35:35.544 { 00:35:35.544 "nsid": 1, 00:35:35.544 "bdev_name": "Nvme0n1", 00:35:35.544 "name": "Nvme0n1", 00:35:35.544 "nguid": "36344730526054870025384500000044", 00:35:35.544 "uuid": "36344730-5260-5487-0025-384500000044" 00:35:35.544 } 00:35:35.544 ] 00:35:35.544 } 00:35:35.544 ] 00:35:35.544 07:18:34 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.544 07:18:34 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:35.544 07:18:34 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:35:35.544 07:18:34 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:35:35.805 07:18:35 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:35:35.805 07:18:35 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:35.805 07:18:35 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:35:35.805 07:18:35 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:35:36.065 07:18:35 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:35:36.065 07:18:35 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:35:36.065 07:18:35 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:35:36.065 07:18:35 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:36.065 07:18:35 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.065 07:18:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:36.066 07:18:35 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.066 07:18:35 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:35:36.066 07:18:35 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:35:36.066 07:18:35 nvmf_identify_passthru -- nvmf/common.sh@514 -- # nvmfcleanup 00:35:36.066 07:18:35 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:35:36.066 07:18:35 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:36.066 07:18:35 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:35:36.066 07:18:35 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:36.066 07:18:35 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:36.066 rmmod nvme_tcp 00:35:36.066 rmmod nvme_fabrics 00:35:36.066 rmmod nvme_keyring 00:35:36.066 07:18:35 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:36.066 07:18:35 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:35:36.066 07:18:35 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:35:36.066 07:18:35 nvmf_identify_passthru -- nvmf/common.sh@515 -- # '[' -n 
3419025 ']' 00:35:36.066 07:18:35 nvmf_identify_passthru -- nvmf/common.sh@516 -- # killprocess 3419025 00:35:36.066 07:18:35 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 3419025 ']' 00:35:36.066 07:18:35 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 3419025 00:35:36.066 07:18:35 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:35:36.066 07:18:35 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:36.066 07:18:35 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3419025 00:35:36.066 07:18:35 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:36.066 07:18:35 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:36.066 07:18:35 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3419025' 00:35:36.066 killing process with pid 3419025 00:35:36.066 07:18:35 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 3419025 00:35:36.066 07:18:35 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 3419025 00:35:36.327 07:18:35 nvmf_identify_passthru -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:35:36.327 07:18:35 nvmf_identify_passthru -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:35:36.327 07:18:35 nvmf_identify_passthru -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:35:36.327 07:18:35 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:35:36.327 07:18:35 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-save 00:35:36.327 07:18:35 nvmf_identify_passthru -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:35:36.327 07:18:35 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-restore 00:35:36.327 07:18:35 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:36.327 07:18:35 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:36.327 07:18:35 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:36.327 07:18:35 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:36.327 07:18:35 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:38.873 07:18:37 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:38.873 00:35:38.873 real 0m13.148s 00:35:38.873 user 0m10.109s 00:35:38.873 sys 0m6.772s 00:35:38.873 07:18:37 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:38.873 07:18:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:38.873 ************************************ 00:35:38.873 END TEST nvmf_identify_passthru 00:35:38.873 ************************************ 00:35:38.873 07:18:37 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:38.873 07:18:37 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:38.873 07:18:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:38.873 07:18:37 -- common/autotest_common.sh@10 -- # set +x 00:35:38.873 ************************************ 00:35:38.873 START TEST nvmf_dif 00:35:38.873 ************************************ 00:35:38.873 07:18:37 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:38.873 * Looking for test storage... 
00:35:38.873 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:38.873 07:18:38 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:38.873 07:18:38 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:35:38.873 07:18:38 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:38.873 07:18:38 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:38.873 07:18:38 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:38.873 07:18:38 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:38.873 07:18:38 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:38.873 07:18:38 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:35:38.873 07:18:38 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:35:38.873 07:18:38 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:35:38.873 07:18:38 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:35:38.873 07:18:38 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:35:38.873 07:18:38 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:35:38.873 07:18:38 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:35:38.873 07:18:38 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:38.873 07:18:38 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:35:38.873 07:18:38 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:35:38.873 07:18:38 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:38.873 07:18:38 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:38.873 07:18:38 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:35:38.873 07:18:38 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:35:38.873 07:18:38 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:38.873 07:18:38 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:35:38.873 07:18:38 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:35:38.873 07:18:38 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:35:38.873 07:18:38 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:35:38.873 07:18:38 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:38.873 07:18:38 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:35:38.873 07:18:38 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:35:38.873 07:18:38 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:38.873 07:18:38 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:38.873 07:18:38 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:35:38.873 07:18:38 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:38.873 07:18:38 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:38.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:38.873 --rc genhtml_branch_coverage=1 00:35:38.873 --rc genhtml_function_coverage=1 00:35:38.873 --rc genhtml_legend=1 00:35:38.873 --rc geninfo_all_blocks=1 00:35:38.873 --rc geninfo_unexecuted_blocks=1 00:35:38.873 00:35:38.873 ' 00:35:38.873 07:18:38 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:38.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:38.873 --rc genhtml_branch_coverage=1 00:35:38.873 --rc genhtml_function_coverage=1 00:35:38.873 --rc genhtml_legend=1 00:35:38.873 --rc geninfo_all_blocks=1 00:35:38.873 --rc geninfo_unexecuted_blocks=1 00:35:38.873 00:35:38.873 ' 00:35:38.873 07:18:38 nvmf_dif -- common/autotest_common.sh@1705 -- # 
export 'LCOV=lcov 00:35:38.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:38.873 --rc genhtml_branch_coverage=1 00:35:38.873 --rc genhtml_function_coverage=1 00:35:38.873 --rc genhtml_legend=1 00:35:38.873 --rc geninfo_all_blocks=1 00:35:38.873 --rc geninfo_unexecuted_blocks=1 00:35:38.873 00:35:38.873 ' 00:35:38.873 07:18:38 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:38.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:38.873 --rc genhtml_branch_coverage=1 00:35:38.873 --rc genhtml_function_coverage=1 00:35:38.873 --rc genhtml_legend=1 00:35:38.873 --rc geninfo_all_blocks=1 00:35:38.873 --rc geninfo_unexecuted_blocks=1 00:35:38.873 00:35:38.873 ' 00:35:38.873 07:18:38 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:38.873 07:18:38 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:35:38.873 07:18:38 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:38.873 07:18:38 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:38.873 07:18:38 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:38.873 07:18:38 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:38.873 07:18:38 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:38.873 07:18:38 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:38.873 07:18:38 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:38.873 07:18:38 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:38.873 07:18:38 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:38.873 07:18:38 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:38.873 07:18:38 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:38.873 07:18:38 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:38.873 07:18:38 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:38.873 07:18:38 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:38.873 07:18:38 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:38.873 07:18:38 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:38.873 07:18:38 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:38.873 07:18:38 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:35:38.873 07:18:38 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:38.874 07:18:38 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:38.874 07:18:38 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:38.874 07:18:38 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.874 07:18:38 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.874 07:18:38 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.874 07:18:38 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:35:38.874 07:18:38 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.874 07:18:38 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:35:38.874 07:18:38 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:38.874 07:18:38 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:38.874 07:18:38 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:38.874 07:18:38 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:38.874 07:18:38 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:38.874 07:18:38 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:38.874 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:38.874 07:18:38 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:38.874 07:18:38 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:38.874 07:18:38 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:38.874 07:18:38 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:35:38.874 07:18:38 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:35:38.874 07:18:38 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:35:38.874 07:18:38 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:35:38.874 07:18:38 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:35:38.874 07:18:38 nvmf_dif -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:35:38.874 07:18:38 nvmf_dif -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:38.874 07:18:38 nvmf_dif -- nvmf/common.sh@474 -- # prepare_net_devs 00:35:38.874 07:18:38 nvmf_dif -- nvmf/common.sh@436 -- # local -g is_hw=no 00:35:38.874 07:18:38 nvmf_dif -- nvmf/common.sh@438 -- # remove_spdk_ns 00:35:38.874 07:18:38 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:38.874 07:18:38 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:38.874 07:18:38 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:38.874 07:18:38 nvmf_dif -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:35:38.874 07:18:38 nvmf_dif -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:35:38.874 07:18:38 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:35:38.874 07:18:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:47.018 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:47.018 
07:18:45 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:47.018 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:47.018 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:47.018 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@440 -- # is_hw=yes 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:35:47.018 07:18:45 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:47.019 07:18:45 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:47.019 07:18:45 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:47.019 07:18:45 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:47.019 07:18:45 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:47.019 07:18:45 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:47.019 07:18:45 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:47.019 07:18:45 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:47.019 07:18:45 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:47.019 07:18:45 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:47.019 07:18:45 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:47.019 07:18:45 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:47.019 07:18:45 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:47.019 07:18:45 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:47.019 07:18:45 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:47.019 07:18:45 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:47.019 07:18:45 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:47.019 07:18:45 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:47.019 07:18:45 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:47.019 07:18:45 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:47.019 07:18:45 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:47.019 07:18:45 nvmf_dif -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:47.019 07:18:45 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:47.019 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:47.019 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:35:47.019 00:35:47.019 --- 10.0.0.2 ping statistics --- 00:35:47.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:47.019 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:35:47.019 07:18:45 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:47.019 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:47.019 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:35:47.019 00:35:47.019 --- 10.0.0.1 ping statistics --- 00:35:47.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:47.019 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:35:47.019 07:18:45 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:47.019 07:18:45 nvmf_dif -- nvmf/common.sh@448 -- # return 0 00:35:47.019 07:18:45 nvmf_dif -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:35:47.019 07:18:45 nvmf_dif -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:49.565 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:35:49.565 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:35:49.565 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:35:49.565 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:35:49.565 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:35:49.565 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:35:49.565 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:35:49.565 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:35:49.565 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:35:49.565 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:35:49.565 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:35:49.565 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:35:49.565 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:35:49.565 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:35:49.565 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:35:49.565 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:35:49.565 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:35:49.565 07:18:49 nvmf_dif -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:49.565 07:18:49 nvmf_dif -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:35:49.565 07:18:49 nvmf_dif -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:35:49.565 07:18:49 nvmf_dif -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:49.565 07:18:49 nvmf_dif -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:35:49.565 07:18:49 nvmf_dif -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:35:49.826 07:18:49 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:35:49.826 07:18:49 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:35:49.826 07:18:49 nvmf_dif -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:35:49.826 07:18:49 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:49.826 07:18:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:49.826 07:18:49 nvmf_dif -- nvmf/common.sh@507 -- # nvmfpid=3425184 00:35:49.826 07:18:49 nvmf_dif -- nvmf/common.sh@508 -- # waitforlisten 3425184 00:35:49.826 07:18:49 nvmf_dif -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:49.826 07:18:49 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 3425184 ']' 00:35:49.826 07:18:49 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:49.826 07:18:49 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:49.826 07:18:49 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:35:49.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:49.826 07:18:49 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:49.826 07:18:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:49.826 [2024-10-16 07:18:49.169632] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:35:49.826 [2024-10-16 07:18:49.169681] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:49.826 [2024-10-16 07:18:49.253953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:49.826 [2024-10-16 07:18:49.289629] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:49.826 [2024-10-16 07:18:49.289661] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:49.826 [2024-10-16 07:18:49.289669] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:49.826 [2024-10-16 07:18:49.289676] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:49.826 [2024-10-16 07:18:49.289682] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:49.826 [2024-10-16 07:18:49.290261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:50.770 07:18:49 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:50.770 07:18:49 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:35:50.770 07:18:49 nvmf_dif -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:35:50.770 07:18:49 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:50.770 07:18:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:50.770 07:18:50 nvmf_dif -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:50.770 07:18:50 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:35:50.770 07:18:50 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:35:50.770 07:18:50 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.770 07:18:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:50.770 [2024-10-16 07:18:50.034694] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:50.770 07:18:50 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.770 07:18:50 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:35:50.770 07:18:50 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:50.770 07:18:50 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:50.770 07:18:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:50.770 ************************************ 00:35:50.770 START TEST fio_dif_1_default 00:35:50.770 ************************************ 00:35:50.770 07:18:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:35:50.770 07:18:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:35:50.770 07:18:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:35:50.770 07:18:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:35:50.770 07:18:50 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:35:50.770 07:18:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:35:50.770 07:18:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:50.770 07:18:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.770 07:18:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:50.770 bdev_null0 00:35:50.770 07:18:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.770 07:18:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:50.770 07:18:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.770 07:18:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:50.770 07:18:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.770 07:18:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:50.770 07:18:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.770 07:18:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:50.770 07:18:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.770 07:18:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:50.770 07:18:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.770 07:18:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:50.770 [2024-10-16 07:18:50.123087] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:50.770 07:18:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.770 07:18:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:35:50.770 07:18:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:35:50.770 07:18:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:50.770 07:18:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # config=() 00:35:50.770 07:18:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # local subsystem config 00:35:50.770 07:18:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:50.770 07:18:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:35:50.770 07:18:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:50.770 { 00:35:50.770 "params": { 00:35:50.770 "name": "Nvme$subsystem", 00:35:50.770 "trtype": "$TEST_TRANSPORT", 00:35:50.770 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:50.770 "adrfam": "ipv4", 00:35:50.770 "trsvcid": "$NVMF_PORT", 00:35:50.770 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:50.770 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:50.770 "hdgst": ${hdgst:-false}, 00:35:50.770 "ddgst": ${ddgst:-false} 00:35:50.770 }, 00:35:50.770 "method": "bdev_nvme_attach_controller" 00:35:50.770 } 00:35:50.770 EOF 00:35:50.770 )") 00:35:50.770 07:18:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:50.770 07:18:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:50.770 07:18:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:35:50.770 07:18:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:50.770 07:18:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:35:50.770 07:18:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:50.770 07:18:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:35:50.770 07:18:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:50.771 07:18:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:35:50.771 07:18:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:50.771 07:18:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:50.771 07:18:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # cat 00:35:50.771 07:18:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:50.771 07:18:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:35:50.771 07:18:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:35:50.771 07:18:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:50.771 07:18:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:35:50.771 07:18:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # jq . 
00:35:50.771 07:18:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@583 -- # IFS=, 00:35:50.771 07:18:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:35:50.771 "params": { 00:35:50.771 "name": "Nvme0", 00:35:50.771 "trtype": "tcp", 00:35:50.771 "traddr": "10.0.0.2", 00:35:50.771 "adrfam": "ipv4", 00:35:50.771 "trsvcid": "4420", 00:35:50.771 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:50.771 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:50.771 "hdgst": false, 00:35:50.771 "ddgst": false 00:35:50.771 }, 00:35:50.771 "method": "bdev_nvme_attach_controller" 00:35:50.771 }' 00:35:50.771 07:18:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:50.771 07:18:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:50.771 07:18:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:50.771 07:18:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:50.771 07:18:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:50.771 07:18:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:50.771 07:18:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:50.771 07:18:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:50.771 07:18:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:50.771 07:18:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:51.031 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:51.031 fio-3.35 00:35:51.031 Starting 1 thread 00:36:03.258 00:36:03.258 filename0: (groupid=0, jobs=1): err= 0: pid=3425728: Wed Oct 16 07:19:01 2024 00:36:03.258 read: IOPS=191, BW=765KiB/s (783kB/s)(7664KiB/10021msec) 00:36:03.258 slat (nsec): min=5646, max=62534, avg=6415.52, stdev=1875.93 00:36:03.258 clat (usec): min=564, max=43155, avg=20901.88, stdev=20177.07 00:36:03.258 lat (usec): min=572, max=43191, avg=20908.30, stdev=20177.04 00:36:03.258 clat percentiles (usec): 00:36:03.258 | 1.00th=[ 693], 5.00th=[ 799], 10.00th=[ 824], 20.00th=[ 840], 00:36:03.258 | 30.00th=[ 865], 40.00th=[ 889], 50.00th=[ 1037], 60.00th=[41157], 00:36:03.258 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:03.258 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:36:03.258 | 99.99th=[43254] 00:36:03.258 bw ( KiB/s): min= 704, max= 896, per=99.90%, avg=764.80, stdev=38.71, samples=20 00:36:03.258 iops : min= 176, max= 224, avg=191.20, stdev= 9.68, samples=20 00:36:03.258 lat (usec) : 750=2.51%, 1000=46.29% 00:36:03.258 lat (msec) : 2=1.51%, 50=49.69% 00:36:03.258 cpu : usr=93.44%, sys=6.33%, ctx=14, majf=0, minf=270 00:36:03.258 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:03.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.258 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.258 issued rwts: total=1916,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:03.258 latency : target=0, window=0, percentile=100.00%, depth=4 
00:36:03.258 00:36:03.259 Run status group 0 (all jobs): 00:36:03.259 READ: bw=765KiB/s (783kB/s), 765KiB/s-765KiB/s (783kB/s-783kB/s), io=7664KiB (7848kB), run=10021-10021msec 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.259 00:36:03.259 real 0m11.213s 00:36:03.259 user 0m18.679s 00:36:03.259 sys 0m1.050s 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:03.259 ************************************ 00:36:03.259 END TEST fio_dif_1_default 00:36:03.259 ************************************ 00:36:03.259 07:19:01 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:36:03.259 07:19:01 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:03.259 07:19:01 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:03.259 07:19:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:03.259 ************************************ 00:36:03.259 START TEST fio_dif_1_multi_subsystems 00:36:03.259 ************************************ 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:03.259 bdev_null0 00:36:03.259 07:19:01 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:03.259 [2024-10-16 07:19:01.419817] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:03.259 bdev_null1 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # config=() 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # local subsystem config 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:03.259 { 00:36:03.259 "params": { 00:36:03.259 "name": "Nvme$subsystem", 00:36:03.259 "trtype": "$TEST_TRANSPORT", 00:36:03.259 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:03.259 "adrfam": "ipv4", 00:36:03.259 "trsvcid": "$NVMF_PORT", 00:36:03.259 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:03.259 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:03.259 "hdgst": ${hdgst:-false}, 00:36:03.259 "ddgst": ${ddgst:-false} 00:36:03.259 }, 00:36:03.259 "method": "bdev_nvme_attach_controller" 00:36:03.259 } 00:36:03.259 EOF 00:36:03.259 )") 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:03.259 
07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:36:03.259 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:03.259 { 00:36:03.259 "params": { 00:36:03.259 "name": "Nvme$subsystem", 00:36:03.259 "trtype": "$TEST_TRANSPORT", 00:36:03.259 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:03.259 "adrfam": "ipv4", 00:36:03.259 "trsvcid": "$NVMF_PORT", 00:36:03.259 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:03.259 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:03.259 "hdgst": ${hdgst:-false}, 00:36:03.259 "ddgst": ${ddgst:-false} 00:36:03.259 }, 00:36:03.259 "method": "bdev_nvme_attach_controller" 00:36:03.259 } 00:36:03.259 EOF 00:36:03.259 )") 00:36:03.260 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:36:03.260 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:03.260 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:36:03.260 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # jq . 00:36:03.260 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@583 -- # IFS=, 00:36:03.260 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:36:03.260 "params": { 00:36:03.260 "name": "Nvme0", 00:36:03.260 "trtype": "tcp", 00:36:03.260 "traddr": "10.0.0.2", 00:36:03.260 "adrfam": "ipv4", 00:36:03.260 "trsvcid": "4420", 00:36:03.260 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:03.260 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:03.260 "hdgst": false, 00:36:03.260 "ddgst": false 00:36:03.260 }, 00:36:03.260 "method": "bdev_nvme_attach_controller" 00:36:03.260 },{ 00:36:03.260 "params": { 00:36:03.260 "name": "Nvme1", 00:36:03.260 "trtype": "tcp", 00:36:03.260 "traddr": "10.0.0.2", 00:36:03.260 "adrfam": "ipv4", 00:36:03.260 "trsvcid": "4420", 00:36:03.260 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:03.260 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:03.260 "hdgst": false, 00:36:03.260 "ddgst": false 00:36:03.260 }, 00:36:03.260 "method": "bdev_nvme_attach_controller" 00:36:03.260 }' 00:36:03.260 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:03.260 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:03.260 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:03.260 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:03.260 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:36:03.260 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:03.260 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 
-- # asan_lib= 00:36:03.260 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:03.260 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:03.260 07:19:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:03.260 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:03.260 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:03.260 fio-3.35 00:36:03.260 Starting 2 threads 00:36:13.262 00:36:13.262 filename0: (groupid=0, jobs=1): err= 0: pid=3427937: Wed Oct 16 07:19:12 2024 00:36:13.262 read: IOPS=191, BW=766KiB/s (785kB/s)(7664KiB/10002msec) 00:36:13.262 slat (nsec): min=5644, max=31818, avg=6643.68, stdev=2163.92 00:36:13.262 clat (usec): min=638, max=41804, avg=20861.18, stdev=20173.51 00:36:13.262 lat (usec): min=646, max=41830, avg=20867.83, stdev=20173.35 00:36:13.262 clat percentiles (usec): 00:36:13.262 | 1.00th=[ 668], 5.00th=[ 775], 10.00th=[ 791], 20.00th=[ 807], 00:36:13.262 | 30.00th=[ 824], 40.00th=[ 840], 50.00th=[ 1045], 60.00th=[41157], 00:36:13.262 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:13.262 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:36:13.262 | 99.99th=[41681] 00:36:13.262 bw ( KiB/s): min= 704, max= 832, per=66.44%, avg=768.00, stdev=21.33, samples=19 00:36:13.262 iops : min= 176, max= 208, avg=192.00, stdev= 5.33, samples=19 00:36:13.262 lat (usec) : 750=3.34%, 1000=46.24% 00:36:13.262 lat (msec) : 2=0.73%, 50=49.69% 00:36:13.262 cpu : usr=95.69%, sys=4.10%, ctx=14, majf=0, minf=141 00:36:13.262 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:13.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.262 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.262 issued rwts: total=1916,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:13.262 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:13.262 filename1: (groupid=0, jobs=1): err= 0: pid=3427938: Wed Oct 16 07:19:12 2024 00:36:13.262 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10007msec) 00:36:13.262 slat (nsec): min=5646, max=31352, avg=6468.12, stdev=1444.49 00:36:13.262 clat (usec): min=40884, max=42191, avg=40991.44, stdev=104.97 00:36:13.262 lat (usec): min=40889, max=42222, avg=40997.91, stdev=105.42 00:36:13.262 clat percentiles (usec): 00:36:13.262 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:36:13.262 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:36:13.262 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:13.262 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:36:13.263 | 99.99th=[42206] 00:36:13.263 bw ( KiB/s): min= 384, max= 416, per=33.56%, avg=388.80, stdev=11.72, samples=20 00:36:13.263 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:36:13.263 lat (msec) : 50=100.00% 00:36:13.263 cpu : usr=95.24%, sys=4.55%, ctx=29, majf=0, minf=135 00:36:13.263 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:13.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.263 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.263 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:13.263 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:13.263 00:36:13.263 Run status group 0 (all jobs): 00:36:13.263 READ: bw=1156KiB/s (1184kB/s), 390KiB/s-766KiB/s (399kB/s-785kB/s), io=11.3MiB (11.8MB), run=10002-10007msec 00:36:13.263 07:19:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:36:13.263 07:19:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:36:13.263 07:19:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:36:13.263 07:19:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:13.263 07:19:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:36:13.263 07:19:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:13.263 07:19:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.263 07:19:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:13.263 07:19:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.263 07:19:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:13.263 07:19:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.263 07:19:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:13.263 07:19:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.263 07:19:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:36:13.263 07:19:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:13.263 07:19:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:36:13.263 07:19:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:13.263 07:19:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.263 07:19:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:13.263 07:19:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.263 07:19:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:13.263 07:19:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.263 07:19:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:13.263 07:19:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.263 00:36:13.263 real 0m11.307s 00:36:13.263 user 0m36.597s 00:36:13.263 sys 0m1.179s 00:36:13.263 07:19:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:13.263 07:19:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:13.263 ************************************ 00:36:13.263 END TEST fio_dif_1_multi_subsystems 00:36:13.263 ************************************ 00:36:13.263 07:19:12 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params 
fio_dif_rand_params 00:36:13.263 07:19:12 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:13.263 07:19:12 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:13.263 07:19:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:13.524 ************************************ 00:36:13.524 START TEST fio_dif_rand_params 00:36:13.524 ************************************ 00:36:13.524 07:19:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:36:13.524 07:19:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:36:13.524 07:19:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:36:13.524 07:19:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:36:13.524 07:19:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:36:13.524 07:19:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:36:13.524 07:19:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:36:13.524 07:19:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:36:13.524 07:19:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:36:13.524 07:19:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:13.524 07:19:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:13.524 07:19:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:13.524 07:19:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:13.524 07:19:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:13.524 07:19:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.524 07:19:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:13.524 bdev_null0 00:36:13.524 07:19:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.524 07:19:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:13.524 07:19:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.524 07:19:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:13.524 07:19:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.524 07:19:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:13.524 07:19:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.524 07:19:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:13.524 07:19:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.525 07:19:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:13.525 07:19:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.525 07:19:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:13.525 [2024-10-16 07:19:12.810652] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:13.525 07:19:12 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.525 07:19:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:36:13.525 07:19:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:36:13.525 07:19:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:13.525 07:19:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:36:13.525 07:19:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:13.525 07:19:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:36:13.525 07:19:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:13.525 07:19:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:36:13.525 07:19:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:13.525 07:19:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:13.525 { 00:36:13.525 "params": { 00:36:13.525 "name": "Nvme$subsystem", 00:36:13.525 "trtype": "$TEST_TRANSPORT", 00:36:13.525 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:13.525 "adrfam": "ipv4", 00:36:13.525 "trsvcid": "$NVMF_PORT", 00:36:13.525 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:13.525 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:13.525 "hdgst": ${hdgst:-false}, 00:36:13.525 "ddgst": ${ddgst:-false} 00:36:13.525 }, 00:36:13.525 "method": "bdev_nvme_attach_controller" 00:36:13.525 } 00:36:13.525 EOF 00:36:13.525 )") 00:36:13.525 07:19:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:13.525 07:19:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:13.525 07:19:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:13.525 07:19:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:13.525 07:19:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:13.525 07:19:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:13.525 07:19:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:36:13.525 07:19:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:13.525 07:19:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:13.525 07:19:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:36:13.525 07:19:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:13.525 07:19:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:13.525 07:19:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:36:13.525 07:19:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:13.525 07:19:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:13.525 07:19:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 
00:36:13.525 07:19:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:36:13.525 07:19:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:36:13.525 "params": { 00:36:13.525 "name": "Nvme0", 00:36:13.525 "trtype": "tcp", 00:36:13.525 "traddr": "10.0.0.2", 00:36:13.525 "adrfam": "ipv4", 00:36:13.525 "trsvcid": "4420", 00:36:13.525 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:13.525 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:13.525 "hdgst": false, 00:36:13.525 "ddgst": false 00:36:13.525 }, 00:36:13.525 "method": "bdev_nvme_attach_controller" 00:36:13.525 }' 00:36:13.525 07:19:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:13.525 07:19:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:13.525 07:19:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:13.525 07:19:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:13.525 07:19:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:36:13.525 07:19:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:13.525 07:19:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:13.525 07:19:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:13.525 07:19:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:13.525 07:19:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:13.786 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:13.786 ... 
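The lines above trace how nvmf/common.sh builds the JSON handed to fio on /dev/fd/62: one here-doc fragment per subsystem is appended to a config array, IFS=',' makes "${config[*]}" join the fragments with commas, and jq . validates and pretty-prints the result (the printf argument above is exactly that joined text). A minimal sketch of the pattern follows; the outer subsystems/bdev wrapper is not visible in this trace and is assumed from SPDK's usual bdev JSON-config layout, so treat it as a reconstruction rather than the harness's literal code.

#!/usr/bin/env bash
# Sketch of the config-assembly pattern traced above (values illustrative;
# two subsystems are used here just to show the comma join).
config=()
for subsystem in 0 1; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
  )")
done
# IFS=, joins the fragments with commas; jq validates and pretty-prints.
# The "subsystems"/"bdev" wrapper below is an assumption based on SPDK's
# standard JSON config shape, not something shown in this trace.
jq . <<JSON
{ "subsystems": [ { "subsystem": "bdev",
    "config": [ $(IFS=,; printf '%s' "${config[*]}") ] } ] }
JSON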
00:36:13.786 fio-3.35 00:36:13.786 Starting 3 threads 00:36:20.374 00:36:20.374 filename0: (groupid=0, jobs=1): err= 0: pid=3430138: Wed Oct 16 07:19:18 2024 00:36:20.374 read: IOPS=173, BW=21.7MiB/s (22.7MB/s)(109MiB/5043msec) 00:36:20.374 slat (nsec): min=8329, max=31571, avg=9065.15, stdev=1233.27 00:36:20.374 clat (msec): min=4, max=132, avg=17.30, stdev=21.19 00:36:20.374 lat (msec): min=4, max=132, avg=17.30, stdev=21.19 00:36:20.374 clat percentiles (msec): 00:36:20.374 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 7], 20.00th=[ 8], 00:36:20.374 | 30.00th=[ 8], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 9], 00:36:20.374 | 70.00th=[ 10], 80.00th=[ 11], 90.00th=[ 50], 95.00th=[ 51], 00:36:20.374 | 99.00th=[ 91], 99.50th=[ 129], 99.90th=[ 133], 99.95th=[ 133], 00:36:20.374 | 99.99th=[ 133] 00:36:20.374 bw ( KiB/s): min=18176, max=30464, per=20.64%, avg=22297.60, stdev=4456.08, samples=10 00:36:20.374 iops : min= 142, max= 238, avg=174.20, stdev=34.81, samples=10 00:36:20.374 lat (msec) : 10=78.60%, 20=2.86%, 50=13.73%, 100=4.23%, 250=0.57% 00:36:20.374 cpu : usr=95.60%, sys=4.18%, ctx=8, majf=0, minf=51 00:36:20.374 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:20.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:20.374 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:20.374 issued rwts: total=874,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:20.375 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:20.375 filename0: (groupid=0, jobs=1): err= 0: pid=3430139: Wed Oct 16 07:19:18 2024 00:36:20.375 read: IOPS=340, BW=42.6MiB/s (44.7MB/s)(215MiB/5045msec) 00:36:20.375 slat (nsec): min=5672, max=30985, avg=6346.42, stdev=1406.65 00:36:20.375 clat (usec): min=4517, max=50548, avg=8766.62, stdev=5565.43 00:36:20.375 lat (usec): min=4523, max=50554, avg=8772.97, stdev=5565.45 00:36:20.375 clat percentiles (usec): 00:36:20.375 | 1.00th=[ 5145], 5.00th=[ 5735], 10.00th=[ 6128], 20.00th=[ 6652], 00:36:20.375 | 30.00th=[ 7242], 40.00th=[ 7635], 50.00th=[ 7898], 60.00th=[ 8225], 00:36:20.375 | 70.00th=[ 8848], 80.00th=[ 9503], 90.00th=[10290], 95.00th=[10814], 00:36:20.375 | 99.00th=[47449], 99.50th=[49021], 99.90th=[49546], 99.95th=[50594], 00:36:20.375 | 99.99th=[50594] 00:36:20.375 bw ( KiB/s): min=23086, max=50176, per=40.69%, avg=43959.80, stdev=7701.19, samples=10 00:36:20.375 iops : min= 180, max= 392, avg=343.40, stdev=60.27, samples=10 00:36:20.375 lat (msec) : 10=85.81%, 20=12.33%, 50=1.80%, 100=0.06% 00:36:20.375 cpu : usr=94.59%, sys=5.19%, ctx=9, majf=0, minf=123 00:36:20.375 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:20.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:20.375 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:20.375 issued rwts: total=1720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:20.375 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:20.375 filename0: (groupid=0, jobs=1): err= 0: pid=3430140: Wed Oct 16 07:19:18 2024 00:36:20.375 read: IOPS=329, BW=41.2MiB/s (43.2MB/s)(208MiB/5044msec) 00:36:20.375 slat (nsec): min=5675, max=32313, avg=6286.73, stdev=1202.19 00:36:20.375 clat (usec): min=4541, max=49339, avg=9059.44, stdev=6410.65 00:36:20.375 lat (usec): min=4547, max=49345, avg=9065.73, stdev=6410.86 00:36:20.375 clat percentiles (usec): 00:36:20.375 | 1.00th=[ 5211], 5.00th=[ 5866], 10.00th=[ 6259], 20.00th=[ 6980], 00:36:20.375 | 30.00th=[ 7373], 40.00th=[ 
7635], 50.00th=[ 7963], 60.00th=[ 8291], 00:36:20.375 | 70.00th=[ 8848], 80.00th=[ 9372], 90.00th=[10159], 95.00th=[10683], 00:36:20.375 | 99.00th=[47449], 99.50th=[47973], 99.90th=[49021], 99.95th=[49546], 00:36:20.375 | 99.99th=[49546] 00:36:20.375 bw ( KiB/s): min=22227, max=49152, per=39.38%, avg=42542.70, stdev=7599.69, samples=10 00:36:20.375 iops : min= 173, max= 384, avg=332.30, stdev=59.57, samples=10 00:36:20.375 lat (msec) : 10=88.70%, 20=8.65%, 50=2.64% 00:36:20.375 cpu : usr=94.67%, sys=5.12%, ctx=7, majf=0, minf=97 00:36:20.375 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:20.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:20.375 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:20.375 issued rwts: total=1664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:20.375 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:20.375 00:36:20.375 Run status group 0 (all jobs): 00:36:20.375 READ: bw=106MiB/s (111MB/s), 21.7MiB/s-42.6MiB/s (22.7MB/s-44.7MB/s), io=532MiB (558MB), run=5043-5045msec 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 
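At this point the harness has torn down the DIF-type-3 subsystem and is recreating three subsystems with DIF type 2 (NULL_DIF=2, bs=4k, numjobs=8, iodepth=16, files=2). Each create_subsystem call in the trace is the same four RPCs against the already-running target; rpc_cmd is the harness wrapper around SPDK's scripts/rpc.py, so the sequence for one subsystem, written out directly, looks roughly like the sketch below (run from the SPDK repo with the default RPC socket; the TCP transport was created earlier in the run, before this excerpt).

#!/usr/bin/env bash
# One subsystem's target-side setup, mirroring the rpc_cmd calls traced
# here: a 64 MiB null bdev with 512 B blocks, 16 B metadata and DIF type 2,
# exposed as a namespace of a TCP-listening NVMe-oF subsystem.
RPC="./scripts/rpc.py"
sub=0

$RPC bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 2
$RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
    --serial-number "53313233-$sub" --allow-any-host
$RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
$RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" \
    -t tcp -a 10.0.0.2 -s 4420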
00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:20.375 bdev_null0 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:20.375 [2024-10-16 07:19:19.085175] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:20.375 bdev_null1 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:20.375 07:19:19 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:20.375 bdev_null2 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:20.375 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:20.376 07:19:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:36:20.376 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:20.376 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:20.376 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:20.376 07:19:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:36:20.376 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:20.376 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:20.376 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:20.376 07:19:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:36:20.376 07:19:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:36:20.376 07:19:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:36:20.376 07:19:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:36:20.376 07:19:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:20.376 07:19:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:36:20.376 07:19:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:36:20.376 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 
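fio_bdev (traced just above via fio_plugin) is a thin wrapper that LD_PRELOADs SPDK's fio bdev engine into a stock fio binary; the two /dev/fd arguments are bash process substitutions carrying the generated JSON bdev config (fd 62) and the generated fio job file (fd 61). The ldd | grep libasan / libclang_rt.asan lines that recur in the trace only decide whether a sanitizer runtime must be preloaded ahead of the plugin; asan_lib stays empty in this run, hence the bare plugin path in LD_PRELOAD. Reduced to its essentials, with plain files standing in for the process substitutions, the invocation is sketched below (paths match this run but are environment-specific; bdev.json and job.fio are illustrative names).

#!/usr/bin/env bash
# The traced fio invocation without the harness plumbing.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

LD_PRELOAD="$SPDK_DIR/build/fio/spdk_bdev" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio

With the spdk_bdev engine, the job file's filename= option names bdevs defined in the JSON config rather than device nodes, which is why the jobs in this log appear as filename0, filename1, and so on.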
00:36:20.376 07:19:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:20.376 { 00:36:20.376 "params": { 00:36:20.376 "name": "Nvme$subsystem", 00:36:20.376 "trtype": "$TEST_TRANSPORT", 00:36:20.376 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:20.376 "adrfam": "ipv4", 00:36:20.376 "trsvcid": "$NVMF_PORT", 00:36:20.376 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:20.376 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:20.376 "hdgst": ${hdgst:-false}, 00:36:20.376 "ddgst": ${ddgst:-false} 00:36:20.376 }, 00:36:20.376 "method": "bdev_nvme_attach_controller" 00:36:20.376 } 00:36:20.376 EOF 00:36:20.376 )") 00:36:20.376 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:20.376 07:19:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:20.376 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:20.376 07:19:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:20.376 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:20.376 07:19:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:20.376 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:20.376 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:36:20.376 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:20.376 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:20.376 07:19:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:36:20.376 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:20.376 07:19:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:20.376 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:36:20.376 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:20.376 07:19:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:20.376 07:19:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:20.376 07:19:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:36:20.376 07:19:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:20.376 { 00:36:20.376 "params": { 00:36:20.376 "name": "Nvme$subsystem", 00:36:20.376 "trtype": "$TEST_TRANSPORT", 00:36:20.376 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:20.376 "adrfam": "ipv4", 00:36:20.376 "trsvcid": "$NVMF_PORT", 00:36:20.376 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:20.376 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:20.376 "hdgst": ${hdgst:-false}, 00:36:20.376 "ddgst": ${ddgst:-false} 00:36:20.376 }, 00:36:20.376 "method": "bdev_nvme_attach_controller" 00:36:20.376 } 00:36:20.376 EOF 00:36:20.376 )") 00:36:20.376 07:19:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:20.376 07:19:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:20.376 07:19:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:20.376 07:19:19 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:36:20.376 07:19:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:20.376 07:19:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:20.376 07:19:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:36:20.376 07:19:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:20.376 { 00:36:20.376 "params": { 00:36:20.376 "name": "Nvme$subsystem", 00:36:20.376 "trtype": "$TEST_TRANSPORT", 00:36:20.376 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:20.376 "adrfam": "ipv4", 00:36:20.376 "trsvcid": "$NVMF_PORT", 00:36:20.376 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:20.376 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:20.376 "hdgst": ${hdgst:-false}, 00:36:20.376 "ddgst": ${ddgst:-false} 00:36:20.376 }, 00:36:20.376 "method": "bdev_nvme_attach_controller" 00:36:20.376 } 00:36:20.376 EOF 00:36:20.376 )") 00:36:20.376 07:19:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:36:20.376 07:19:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:36:20.376 07:19:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:36:20.376 07:19:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:36:20.376 "params": { 00:36:20.376 "name": "Nvme0", 00:36:20.376 "trtype": "tcp", 00:36:20.376 "traddr": "10.0.0.2", 00:36:20.376 "adrfam": "ipv4", 00:36:20.376 "trsvcid": "4420", 00:36:20.376 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:20.376 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:20.376 "hdgst": false, 00:36:20.376 "ddgst": false 00:36:20.376 }, 00:36:20.376 "method": "bdev_nvme_attach_controller" 00:36:20.376 },{ 00:36:20.376 "params": { 00:36:20.376 "name": "Nvme1", 00:36:20.376 "trtype": "tcp", 00:36:20.376 "traddr": "10.0.0.2", 00:36:20.376 "adrfam": "ipv4", 00:36:20.376 "trsvcid": "4420", 00:36:20.376 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:20.376 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:20.376 "hdgst": false, 00:36:20.376 "ddgst": false 00:36:20.376 }, 00:36:20.376 "method": "bdev_nvme_attach_controller" 00:36:20.376 },{ 00:36:20.376 "params": { 00:36:20.376 "name": "Nvme2", 00:36:20.376 "trtype": "tcp", 00:36:20.376 "traddr": "10.0.0.2", 00:36:20.376 "adrfam": "ipv4", 00:36:20.376 "trsvcid": "4420", 00:36:20.376 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:36:20.376 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:36:20.376 "hdgst": false, 00:36:20.376 "ddgst": false 00:36:20.376 }, 00:36:20.376 "method": "bdev_nvme_attach_controller" 00:36:20.376 }' 00:36:20.376 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:20.376 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:20.376 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:20.376 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:20.376 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:36:20.376 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:20.376 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:20.376 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n 
'' ]] 00:36:20.377 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:20.377 07:19:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:20.377 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:20.377 ... 00:36:20.377 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:20.377 ... 00:36:20.377 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:20.377 ... 00:36:20.377 fio-3.35 00:36:20.377 Starting 24 threads 00:36:32.616 00:36:32.616 filename0: (groupid=0, jobs=1): err= 0: pid=3431647: Wed Oct 16 07:19:30 2024 00:36:32.616 read: IOPS=712, BW=2850KiB/s (2918kB/s)(28.0MiB/10050msec) 00:36:32.616 slat (usec): min=5, max=200, avg=21.54, stdev=22.65 00:36:32.616 clat (usec): min=7008, max=50571, avg=22224.14, stdev=4399.97 00:36:32.616 lat (usec): min=7017, max=50578, avg=22245.68, stdev=4403.25 00:36:32.616 clat percentiles (usec): 00:36:32.616 | 1.00th=[10945], 5.00th=[14091], 10.00th=[16057], 20.00th=[19792], 00:36:32.616 | 30.00th=[22152], 40.00th=[22414], 50.00th=[22938], 60.00th=[22938], 00:36:32.616 | 70.00th=[23462], 80.00th=[23725], 90.00th=[25297], 95.00th=[28705], 00:36:32.616 | 99.00th=[38536], 99.50th=[40109], 99.90th=[44303], 99.95th=[44303], 00:36:32.616 | 99.99th=[50594] 00:36:32.616 bw ( KiB/s): min= 2704, max= 3232, per=4.30%, avg=2863.05, stdev=119.86, samples=20 00:36:32.616 iops : min= 676, max= 808, avg=715.75, stdev=29.95, samples=20 00:36:32.616 lat (msec) : 10=0.45%, 20=20.18%, 50=79.33%, 100=0.04% 00:36:32.616 cpu : usr=98.88%, sys=0.79%, ctx=30, majf=0, minf=58 00:36:32.616 IO depths : 1=2.2%, 2=4.5%, 4=13.3%, 8=69.2%, 16=10.8%, 32=0.0%, >=64=0.0% 00:36:32.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.616 complete : 0=0.0%, 4=91.0%, 8=3.8%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.616 issued rwts: total=7160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.616 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:32.616 filename0: (groupid=0, jobs=1): err= 0: pid=3431648: Wed Oct 16 07:19:30 2024 00:36:32.616 read: IOPS=689, BW=2758KiB/s (2824kB/s)(27.0MiB/10011msec) 00:36:32.616 slat (usec): min=6, max=184, avg=29.76, stdev=22.63 00:36:32.616 clat (usec): min=10019, max=42673, avg=22911.60, stdev=2567.85 00:36:32.616 lat (usec): min=10026, max=42683, avg=22941.37, stdev=2569.88 00:36:32.616 clat percentiles (usec): 00:36:32.616 | 1.00th=[13304], 5.00th=[19792], 10.00th=[21890], 20.00th=[22414], 00:36:32.616 | 30.00th=[22414], 40.00th=[22676], 50.00th=[22938], 60.00th=[23200], 00:36:32.616 | 70.00th=[23462], 80.00th=[23725], 90.00th=[24249], 95.00th=[25560], 00:36:32.616 | 99.00th=[32113], 99.50th=[36963], 99.90th=[38011], 99.95th=[42730], 00:36:32.616 | 99.99th=[42730] 00:36:32.616 bw ( KiB/s): min= 2560, max= 3152, per=4.14%, avg=2757.05, stdev=132.59, samples=19 00:36:32.616 iops : min= 640, max= 788, avg=689.26, stdev=33.15, samples=19 00:36:32.616 lat (msec) : 20=5.17%, 50=94.83% 00:36:32.616 cpu : usr=99.00%, sys=0.67%, ctx=53, majf=0, minf=43 00:36:32.616 IO depths : 1=5.2%, 2=10.4%, 4=21.6%, 8=55.3%, 16=7.5%, 32=0.0%, >=64=0.0% 00:36:32.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.616 complete : 0=0.0%, 4=93.2%, 8=1.2%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.616 issued rwts: total=6902,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.616 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:32.616 filename0: (groupid=0, jobs=1): err= 0: pid=3431649: Wed Oct 16 07:19:30 2024 00:36:32.616 read: IOPS=685, BW=2743KiB/s (2809kB/s)(26.8MiB/10008msec) 00:36:32.616 slat (usec): min=5, max=161, avg=32.70, stdev=23.05 00:36:32.616 clat (usec): min=10791, max=30051, avg=23018.52, stdev=1210.18 00:36:32.616 lat (usec): min=10803, max=30059, avg=23051.22, stdev=1209.20 00:36:32.616 clat percentiles (usec): 00:36:32.616 | 1.00th=[20317], 5.00th=[21890], 10.00th=[22152], 20.00th=[22414], 00:36:32.616 | 30.00th=[22676], 40.00th=[22676], 50.00th=[22938], 60.00th=[23200], 00:36:32.616 | 70.00th=[23200], 80.00th=[23462], 90.00th=[23987], 95.00th=[24511], 00:36:32.616 | 99.00th=[26870], 99.50th=[27919], 99.90th=[30016], 99.95th=[30016], 00:36:32.616 | 99.99th=[30016] 00:36:32.616 bw ( KiB/s): min= 2565, max= 2821, per=4.12%, avg=2742.95, stdev=77.41, samples=19 00:36:32.616 iops : min= 641, max= 705, avg=685.68, stdev=19.37, samples=19 00:36:32.616 lat (msec) : 20=0.93%, 50=99.07% 00:36:32.616 cpu : usr=98.91%, sys=0.78%, ctx=17, majf=0, minf=41 00:36:32.616 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:32.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.616 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.616 issued rwts: total=6864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.616 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:32.616 filename0: (groupid=0, jobs=1): err= 0: pid=3431650: Wed Oct 16 07:19:30 2024 00:36:32.616 read: IOPS=703, BW=2815KiB/s (2883kB/s)(27.5MiB/10006msec) 00:36:32.616 slat (usec): min=5, max=163, avg=23.15, stdev=21.01 00:36:32.616 clat (usec): min=2026, max=44239, avg=22531.00, stdev=3316.52 00:36:32.616 lat (usec): min=2051, max=44248, avg=22554.15, stdev=3317.29 00:36:32.616 clat percentiles (usec): 00:36:32.616 | 1.00th=[ 5080], 5.00th=[17957], 10.00th=[21890], 20.00th=[22414], 00:36:32.616 | 30.00th=[22676], 40.00th=[22676], 50.00th=[22938], 60.00th=[23200], 00:36:32.616 | 70.00th=[23462], 80.00th=[23725], 90.00th=[24249], 95.00th=[25035], 00:36:32.616 | 99.00th=[26346], 99.50th=[29754], 99.90th=[38011], 99.95th=[44303], 00:36:32.616 | 99.99th=[44303] 00:36:32.616 bw ( KiB/s): min= 2560, max= 3760, per=4.23%, avg=2816.84, stdev=247.73, samples=19 00:36:32.616 iops : min= 640, max= 940, avg=704.21, stdev=61.93, samples=19 00:36:32.616 lat (msec) : 4=0.91%, 10=0.95%, 20=3.95%, 50=94.19% 00:36:32.616 cpu : usr=98.85%, sys=0.83%, ctx=36, majf=0, minf=50 00:36:32.616 IO depths : 1=5.6%, 2=11.3%, 4=23.2%, 8=53.0%, 16=6.9%, 32=0.0%, >=64=0.0% 00:36:32.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.616 complete : 0=0.0%, 4=93.6%, 8=0.6%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.616 issued rwts: total=7042,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.616 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:32.616 filename0: (groupid=0, jobs=1): err= 0: pid=3431651: Wed Oct 16 07:19:30 2024 00:36:32.616 read: IOPS=683, BW=2735KiB/s (2801kB/s)(26.7MiB/10011msec) 00:36:32.616 slat (usec): min=5, max=133, avg=28.50, stdev=19.80 00:36:32.616 clat (usec): min=13941, max=36942, avg=23132.61, stdev=1895.36 00:36:32.616 lat (usec): min=13946, 
max=36948, avg=23161.12, stdev=1895.15 00:36:32.616 clat percentiles (usec): 00:36:32.616 | 1.00th=[16188], 5.00th=[21890], 10.00th=[22152], 20.00th=[22414], 00:36:32.616 | 30.00th=[22676], 40.00th=[22938], 50.00th=[22938], 60.00th=[23200], 00:36:32.616 | 70.00th=[23462], 80.00th=[23725], 90.00th=[24249], 95.00th=[25297], 00:36:32.616 | 99.00th=[31065], 99.50th=[35390], 99.90th=[35914], 99.95th=[36963], 00:36:32.616 | 99.99th=[36963] 00:36:32.616 bw ( KiB/s): min= 2560, max= 2864, per=4.11%, avg=2735.63, stdev=88.65, samples=19 00:36:32.616 iops : min= 640, max= 716, avg=683.84, stdev=22.17, samples=19 00:36:32.616 lat (msec) : 20=2.53%, 50=97.47% 00:36:32.616 cpu : usr=99.08%, sys=0.61%, ctx=21, majf=0, minf=35 00:36:32.616 IO depths : 1=5.7%, 2=11.5%, 4=23.4%, 8=52.4%, 16=6.9%, 32=0.0%, >=64=0.0% 00:36:32.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.616 complete : 0=0.0%, 4=93.7%, 8=0.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.616 issued rwts: total=6846,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.616 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:32.616 filename0: (groupid=0, jobs=1): err= 0: pid=3431652: Wed Oct 16 07:19:30 2024 00:36:32.616 read: IOPS=713, BW=2855KiB/s (2923kB/s)(27.9MiB/10004msec) 00:36:32.616 slat (usec): min=5, max=140, avg=15.69, stdev=15.34 00:36:32.616 clat (usec): min=2746, max=41644, avg=22323.43, stdev=4433.61 00:36:32.616 lat (usec): min=2752, max=41666, avg=22339.12, stdev=4435.68 00:36:32.616 clat percentiles (usec): 00:36:32.616 | 1.00th=[10028], 5.00th=[14353], 10.00th=[16712], 20.00th=[19792], 00:36:32.616 | 30.00th=[22152], 40.00th=[22676], 50.00th=[22938], 60.00th=[23200], 00:36:32.616 | 70.00th=[23462], 80.00th=[23987], 90.00th=[25560], 95.00th=[29754], 00:36:32.616 | 99.00th=[37487], 99.50th=[39584], 99.90th=[41157], 99.95th=[41157], 00:36:32.616 | 99.99th=[41681] 00:36:32.616 bw ( KiB/s): min= 2688, max= 3040, per=4.27%, avg=2842.11, stdev=102.89, samples=19 00:36:32.616 iops : min= 672, max= 760, avg=710.53, stdev=25.72, samples=19 00:36:32.616 lat (msec) : 4=0.08%, 10=0.85%, 20=20.04%, 50=79.02% 00:36:32.616 cpu : usr=99.03%, sys=0.66%, ctx=28, majf=0, minf=29 00:36:32.616 IO depths : 1=1.0%, 2=2.2%, 4=8.0%, 8=75.1%, 16=13.6%, 32=0.0%, >=64=0.0% 00:36:32.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.616 complete : 0=0.0%, 4=90.1%, 8=6.3%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.616 issued rwts: total=7140,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.616 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:32.616 filename0: (groupid=0, jobs=1): err= 0: pid=3431653: Wed Oct 16 07:19:30 2024 00:36:32.616 read: IOPS=689, BW=2760KiB/s (2826kB/s)(27.0MiB/10004msec) 00:36:32.616 slat (usec): min=5, max=528, avg=29.66, stdev=22.13 00:36:32.616 clat (usec): min=5387, max=44606, avg=22929.48, stdev=2993.47 00:36:32.616 lat (usec): min=5396, max=44623, avg=22959.15, stdev=2995.38 00:36:32.616 clat percentiles (usec): 00:36:32.616 | 1.00th=[12780], 5.00th=[18482], 10.00th=[21890], 20.00th=[22414], 00:36:32.616 | 30.00th=[22414], 40.00th=[22676], 50.00th=[22938], 60.00th=[23200], 00:36:32.616 | 70.00th=[23462], 80.00th=[23725], 90.00th=[24511], 95.00th=[26346], 00:36:32.616 | 99.00th=[34341], 99.50th=[37487], 99.90th=[44303], 99.95th=[44827], 00:36:32.616 | 99.99th=[44827] 00:36:32.616 bw ( KiB/s): min= 2576, max= 2896, per=4.14%, avg=2753.11, stdev=77.44, samples=19 00:36:32.616 iops : min= 644, max= 724, avg=688.16, stdev=19.36, 
samples=19 00:36:32.616 lat (msec) : 10=0.23%, 20=6.10%, 50=93.67% 00:36:32.616 cpu : usr=99.03%, sys=0.64%, ctx=19, majf=0, minf=34 00:36:32.616 IO depths : 1=3.9%, 2=8.5%, 4=19.5%, 8=59.0%, 16=9.2%, 32=0.0%, >=64=0.0% 00:36:32.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.617 complete : 0=0.0%, 4=92.7%, 8=2.1%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.617 issued rwts: total=6902,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.617 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:32.617 filename0: (groupid=0, jobs=1): err= 0: pid=3431654: Wed Oct 16 07:19:30 2024 00:36:32.617 read: IOPS=692, BW=2770KiB/s (2837kB/s)(27.1MiB/10006msec) 00:36:32.617 slat (usec): min=5, max=533, avg=26.13, stdev=23.22 00:36:32.617 clat (usec): min=6287, max=39216, avg=22884.74, stdev=2435.52 00:36:32.617 lat (usec): min=6296, max=39227, avg=22910.87, stdev=2437.08 00:36:32.617 clat percentiles (usec): 00:36:32.617 | 1.00th=[13698], 5.00th=[20055], 10.00th=[21890], 20.00th=[22414], 00:36:32.617 | 30.00th=[22676], 40.00th=[22676], 50.00th=[22938], 60.00th=[23200], 00:36:32.617 | 70.00th=[23462], 80.00th=[23725], 90.00th=[24249], 95.00th=[25297], 00:36:32.617 | 99.00th=[31065], 99.50th=[34866], 99.90th=[39060], 99.95th=[39060], 00:36:32.617 | 99.99th=[39060] 00:36:32.617 bw ( KiB/s): min= 2560, max= 2992, per=4.16%, avg=2769.68, stdev=125.97, samples=19 00:36:32.617 iops : min= 640, max= 748, avg=692.42, stdev=31.49, samples=19 00:36:32.617 lat (msec) : 10=0.16%, 20=4.91%, 50=94.94% 00:36:32.617 cpu : usr=98.69%, sys=0.91%, ctx=86, majf=0, minf=56 00:36:32.617 IO depths : 1=5.6%, 2=11.3%, 4=23.2%, 8=53.0%, 16=6.9%, 32=0.0%, >=64=0.0% 00:36:32.617 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.617 complete : 0=0.0%, 4=93.6%, 8=0.6%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.617 issued rwts: total=6930,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.617 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:32.617 filename1: (groupid=0, jobs=1): err= 0: pid=3431655: Wed Oct 16 07:19:30 2024 00:36:32.617 read: IOPS=680, BW=2722KiB/s (2787kB/s)(26.6MiB/10003msec) 00:36:32.617 slat (usec): min=5, max=105, avg=18.40, stdev=16.31 00:36:32.617 clat (usec): min=4227, max=58047, avg=23422.44, stdev=3511.54 00:36:32.617 lat (usec): min=4235, max=58067, avg=23440.83, stdev=3512.67 00:36:32.617 clat percentiles (usec): 00:36:32.617 | 1.00th=[12780], 5.00th=[18482], 10.00th=[21365], 20.00th=[22414], 00:36:32.617 | 30.00th=[22676], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462], 00:36:32.617 | 70.00th=[23725], 80.00th=[23987], 90.00th=[26346], 95.00th=[28705], 00:36:32.617 | 99.00th=[38011], 99.50th=[38536], 99.90th=[43779], 99.95th=[57934], 00:36:32.617 | 99.99th=[57934] 00:36:32.617 bw ( KiB/s): min= 2488, max= 2912, per=4.07%, avg=2711.00, stdev=113.64, samples=19 00:36:32.617 iops : min= 622, max= 728, avg=677.74, stdev=28.42, samples=19 00:36:32.617 lat (msec) : 10=0.35%, 20=6.20%, 50=93.37%, 100=0.07% 00:36:32.617 cpu : usr=98.93%, sys=0.74%, ctx=22, majf=0, minf=43 00:36:32.617 IO depths : 1=0.4%, 2=0.9%, 4=5.1%, 8=77.7%, 16=16.0%, 32=0.0%, >=64=0.0% 00:36:32.617 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.617 complete : 0=0.0%, 4=89.9%, 8=8.1%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.617 issued rwts: total=6806,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.617 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:32.617 filename1: (groupid=0, jobs=1): err= 0: 
pid=3431656: Wed Oct 16 07:19:30 2024 00:36:32.617 read: IOPS=693, BW=2774KiB/s (2841kB/s)(27.1MiB/10003msec) 00:36:32.617 slat (usec): min=5, max=111, avg=19.91, stdev=16.68 00:36:32.617 clat (usec): min=7490, max=43986, avg=22945.18, stdev=3454.67 00:36:32.617 lat (usec): min=7505, max=44007, avg=22965.10, stdev=3456.57 00:36:32.617 clat percentiles (usec): 00:36:32.617 | 1.00th=[13566], 5.00th=[16319], 10.00th=[19268], 20.00th=[22152], 00:36:32.617 | 30.00th=[22676], 40.00th=[22938], 50.00th=[22938], 60.00th=[23200], 00:36:32.617 | 70.00th=[23462], 80.00th=[23987], 90.00th=[25297], 95.00th=[28443], 00:36:32.617 | 99.00th=[34341], 99.50th=[39060], 99.90th=[43779], 99.95th=[43779], 00:36:32.617 | 99.99th=[43779] 00:36:32.617 bw ( KiB/s): min= 2560, max= 3024, per=4.15%, avg=2764.89, stdev=127.49, samples=19 00:36:32.617 iops : min= 640, max= 756, avg=691.21, stdev=31.89, samples=19 00:36:32.617 lat (msec) : 10=0.23%, 20=10.68%, 50=89.09% 00:36:32.617 cpu : usr=98.86%, sys=0.77%, ctx=59, majf=0, minf=56 00:36:32.617 IO depths : 1=1.3%, 2=2.7%, 4=9.5%, 8=72.6%, 16=14.0%, 32=0.0%, >=64=0.0% 00:36:32.617 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.617 complete : 0=0.0%, 4=90.9%, 8=6.1%, 16=3.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.617 issued rwts: total=6938,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.617 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:32.617 filename1: (groupid=0, jobs=1): err= 0: pid=3431657: Wed Oct 16 07:19:30 2024 00:36:32.617 read: IOPS=694, BW=2779KiB/s (2845kB/s)(27.1MiB/10003msec) 00:36:32.617 slat (usec): min=5, max=126, avg=19.28, stdev=18.01 00:36:32.617 clat (usec): min=6319, max=54978, avg=22908.59, stdev=3753.78 00:36:32.617 lat (usec): min=6325, max=54997, avg=22927.87, stdev=3755.34 00:36:32.617 clat percentiles (usec): 00:36:32.617 | 1.00th=[11469], 5.00th=[15795], 10.00th=[19006], 20.00th=[22152], 00:36:32.617 | 30.00th=[22414], 40.00th=[22676], 50.00th=[22938], 60.00th=[23200], 00:36:32.617 | 70.00th=[23462], 80.00th=[23987], 90.00th=[26084], 95.00th=[28967], 00:36:32.617 | 99.00th=[35390], 99.50th=[37487], 99.90th=[46924], 99.95th=[46924], 00:36:32.617 | 99.99th=[54789] 00:36:32.617 bw ( KiB/s): min= 2512, max= 2880, per=4.15%, avg=2765.05, stdev=84.25, samples=19 00:36:32.617 iops : min= 628, max= 720, avg=691.26, stdev=21.06, samples=19 00:36:32.617 lat (msec) : 10=0.26%, 20=11.50%, 50=88.23%, 100=0.01% 00:36:32.617 cpu : usr=98.89%, sys=0.78%, ctx=22, majf=0, minf=38 00:36:32.617 IO depths : 1=1.3%, 2=2.7%, 4=8.1%, 8=74.1%, 16=13.9%, 32=0.0%, >=64=0.0% 00:36:32.617 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.617 complete : 0=0.0%, 4=89.8%, 8=7.1%, 16=3.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.617 issued rwts: total=6949,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.617 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:32.617 filename1: (groupid=0, jobs=1): err= 0: pid=3431658: Wed Oct 16 07:19:30 2024 00:36:32.617 read: IOPS=698, BW=2795KiB/s (2862kB/s)(27.3MiB/10008msec) 00:36:32.617 slat (usec): min=5, max=113, avg=20.08, stdev=16.32 00:36:32.617 clat (usec): min=8158, max=41532, avg=22735.87, stdev=2584.30 00:36:32.617 lat (usec): min=8166, max=41538, avg=22755.96, stdev=2585.26 00:36:32.617 clat percentiles (usec): 00:36:32.617 | 1.00th=[13698], 5.00th=[17433], 10.00th=[21627], 20.00th=[22414], 00:36:32.617 | 30.00th=[22676], 40.00th=[22676], 50.00th=[22938], 60.00th=[23200], 00:36:32.617 | 70.00th=[23462], 80.00th=[23725], 90.00th=[24249], 
95.00th=[25035], 00:36:32.617 | 99.00th=[29754], 99.50th=[32900], 99.90th=[40109], 99.95th=[41681], 00:36:32.617 | 99.99th=[41681] 00:36:32.617 bw ( KiB/s): min= 2688, max= 3168, per=4.20%, avg=2796.89, stdev=128.94, samples=19 00:36:32.617 iops : min= 672, max= 792, avg=699.16, stdev=32.18, samples=19 00:36:32.617 lat (msec) : 10=0.09%, 20=8.37%, 50=91.55% 00:36:32.617 cpu : usr=98.78%, sys=0.88%, ctx=15, majf=0, minf=36 00:36:32.617 IO depths : 1=4.3%, 2=9.3%, 4=21.0%, 8=57.1%, 16=8.3%, 32=0.0%, >=64=0.0% 00:36:32.617 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.617 complete : 0=0.0%, 4=93.0%, 8=1.4%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.617 issued rwts: total=6992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.617 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:32.617 filename1: (groupid=0, jobs=1): err= 0: pid=3431659: Wed Oct 16 07:19:30 2024 00:36:32.617 read: IOPS=712, BW=2849KiB/s (2917kB/s)(27.8MiB/10002msec) 00:36:32.617 slat (usec): min=5, max=110, avg=16.33, stdev=14.57 00:36:32.617 clat (usec): min=2025, max=43178, avg=22342.93, stdev=3828.42 00:36:32.617 lat (usec): min=2036, max=43189, avg=22359.26, stdev=3828.85 00:36:32.617 clat percentiles (usec): 00:36:32.617 | 1.00th=[ 3687], 5.00th=[15008], 10.00th=[19268], 20.00th=[22152], 00:36:32.617 | 30.00th=[22676], 40.00th=[22676], 50.00th=[22938], 60.00th=[23200], 00:36:32.617 | 70.00th=[23462], 80.00th=[23725], 90.00th=[24249], 95.00th=[25822], 00:36:32.617 | 99.00th=[31327], 99.50th=[32637], 99.90th=[39060], 99.95th=[43254], 00:36:32.617 | 99.99th=[43254] 00:36:32.617 bw ( KiB/s): min= 2688, max= 3704, per=4.29%, avg=2857.68, stdev=230.56, samples=19 00:36:32.617 iops : min= 672, max= 926, avg=714.42, stdev=57.64, samples=19 00:36:32.617 lat (msec) : 4=1.10%, 10=0.69%, 20=9.86%, 50=88.36% 00:36:32.617 cpu : usr=98.73%, sys=0.95%, ctx=19, majf=0, minf=80 00:36:32.617 IO depths : 1=4.7%, 2=9.4%, 4=20.3%, 8=57.6%, 16=8.0%, 32=0.0%, >=64=0.0% 00:36:32.617 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.617 complete : 0=0.0%, 4=92.9%, 8=1.5%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.617 issued rwts: total=7123,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.617 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:32.617 filename1: (groupid=0, jobs=1): err= 0: pid=3431660: Wed Oct 16 07:19:30 2024 00:36:32.617 read: IOPS=706, BW=2826KiB/s (2893kB/s)(27.6MiB/10014msec) 00:36:32.617 slat (usec): min=5, max=174, avg=23.75, stdev=20.64 00:36:32.617 clat (usec): min=6990, max=42521, avg=22455.99, stdev=3687.36 00:36:32.617 lat (usec): min=7000, max=42531, avg=22479.74, stdev=3690.29 00:36:32.617 clat percentiles (usec): 00:36:32.617 | 1.00th=[10814], 5.00th=[14615], 10.00th=[17957], 20.00th=[21890], 00:36:32.617 | 30.00th=[22414], 40.00th=[22676], 50.00th=[22938], 60.00th=[23200], 00:36:32.617 | 70.00th=[23462], 80.00th=[23725], 90.00th=[24511], 95.00th=[26346], 00:36:32.617 | 99.00th=[35390], 99.50th=[40109], 99.90th=[41681], 99.95th=[42730], 00:36:32.617 | 99.99th=[42730] 00:36:32.617 bw ( KiB/s): min= 2656, max= 3126, per=4.25%, avg=2826.42, stdev=121.17, samples=19 00:36:32.617 iops : min= 664, max= 781, avg=706.58, stdev=30.22, samples=19 00:36:32.617 lat (msec) : 10=0.81%, 20=13.06%, 50=86.13% 00:36:32.617 cpu : usr=98.95%, sys=0.72%, ctx=21, majf=0, minf=48 00:36:32.617 IO depths : 1=4.1%, 2=8.4%, 4=19.0%, 8=59.9%, 16=8.6%, 32=0.0%, >=64=0.0% 00:36:32.617 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:36:32.617 complete : 0=0.0%, 4=92.5%, 8=2.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.617 issued rwts: total=7074,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.617 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:32.617 filename1: (groupid=0, jobs=1): err= 0: pid=3431661: Wed Oct 16 07:19:30 2024 00:36:32.617 read: IOPS=695, BW=2782KiB/s (2849kB/s)(27.2MiB/10009msec) 00:36:32.617 slat (usec): min=5, max=127, avg=28.36, stdev=20.73 00:36:32.617 clat (usec): min=8992, max=40812, avg=22743.71, stdev=2660.99 00:36:32.617 lat (usec): min=8999, max=40834, avg=22772.07, stdev=2663.45 00:36:32.617 clat percentiles (usec): 00:36:32.617 | 1.00th=[13435], 5.00th=[17695], 10.00th=[21627], 20.00th=[22152], 00:36:32.617 | 30.00th=[22414], 40.00th=[22676], 50.00th=[22938], 60.00th=[22938], 00:36:32.617 | 70.00th=[23200], 80.00th=[23725], 90.00th=[24249], 95.00th=[25560], 00:36:32.617 | 99.00th=[33817], 99.50th=[34866], 99.90th=[40109], 99.95th=[40633], 00:36:32.617 | 99.99th=[40633] 00:36:32.617 bw ( KiB/s): min= 2528, max= 3120, per=4.18%, avg=2783.16, stdev=127.30, samples=19 00:36:32.617 iops : min= 632, max= 780, avg=695.79, stdev=31.83, samples=19 00:36:32.617 lat (msec) : 10=0.16%, 20=8.42%, 50=91.42% 00:36:32.617 cpu : usr=98.82%, sys=0.84%, ctx=36, majf=0, minf=47 00:36:32.617 IO depths : 1=4.6%, 2=9.2%, 4=19.9%, 8=57.9%, 16=8.4%, 32=0.0%, >=64=0.0% 00:36:32.617 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.618 complete : 0=0.0%, 4=92.7%, 8=2.0%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.618 issued rwts: total=6962,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.618 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:32.618 filename1: (groupid=0, jobs=1): err= 0: pid=3431662: Wed Oct 16 07:19:30 2024 00:36:32.618 read: IOPS=688, BW=2755KiB/s (2821kB/s)(27.0MiB/10048msec) 00:36:32.618 slat (usec): min=5, max=112, avg=27.73, stdev=18.66 00:36:32.618 clat (usec): min=7493, max=64333, avg=22974.56, stdev=2847.42 00:36:32.618 lat (usec): min=7501, max=64342, avg=23002.29, stdev=2848.69 00:36:32.618 clat percentiles (usec): 00:36:32.618 | 1.00th=[12780], 5.00th=[19006], 10.00th=[21890], 20.00th=[22414], 00:36:32.618 | 30.00th=[22676], 40.00th=[22676], 50.00th=[22938], 60.00th=[23200], 00:36:32.618 | 70.00th=[23462], 80.00th=[23725], 90.00th=[24249], 95.00th=[25822], 00:36:32.618 | 99.00th=[31851], 99.50th=[35914], 99.90th=[50594], 99.95th=[64226], 00:36:32.618 | 99.99th=[64226] 00:36:32.618 bw ( KiB/s): min= 2560, max= 2912, per=4.14%, avg=2755.05, stdev=95.00, samples=19 00:36:32.618 iops : min= 640, max= 728, avg=688.74, stdev=23.75, samples=19 00:36:32.618 lat (msec) : 10=0.04%, 20=5.98%, 50=93.80%, 100=0.17% 00:36:32.618 cpu : usr=98.81%, sys=0.86%, ctx=18, majf=0, minf=47 00:36:32.618 IO depths : 1=5.2%, 2=10.4%, 4=21.8%, 8=55.0%, 16=7.5%, 32=0.0%, >=64=0.0% 00:36:32.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.618 complete : 0=0.0%, 4=93.2%, 8=1.2%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.618 issued rwts: total=6920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.618 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:32.618 filename2: (groupid=0, jobs=1): err= 0: pid=3431663: Wed Oct 16 07:19:30 2024 00:36:32.618 read: IOPS=691, BW=2767KiB/s (2833kB/s)(27.0MiB/10010msec) 00:36:32.618 slat (usec): min=5, max=108, avg=21.84, stdev=18.35 00:36:32.618 clat (usec): min=8376, max=40104, avg=22946.31, stdev=2321.10 00:36:32.618 lat (usec): min=8383, max=40119, 
avg=22968.15, stdev=2321.98 00:36:32.618 clat percentiles (usec): 00:36:32.618 | 1.00th=[13566], 5.00th=[19792], 10.00th=[22152], 20.00th=[22414], 00:36:32.618 | 30.00th=[22676], 40.00th=[22938], 50.00th=[22938], 60.00th=[23200], 00:36:32.618 | 70.00th=[23462], 80.00th=[23725], 90.00th=[23987], 95.00th=[25035], 00:36:32.618 | 99.00th=[31589], 99.50th=[34341], 99.90th=[38536], 99.95th=[40109], 00:36:32.618 | 99.99th=[40109] 00:36:32.618 bw ( KiB/s): min= 2688, max= 2869, per=4.16%, avg=2768.21, stdev=65.48, samples=19 00:36:32.618 iops : min= 672, max= 717, avg=692.00, stdev=16.32, samples=19 00:36:32.618 lat (msec) : 10=0.09%, 20=4.95%, 50=94.96% 00:36:32.618 cpu : usr=98.82%, sys=0.86%, ctx=15, majf=0, minf=38 00:36:32.618 IO depths : 1=5.4%, 2=11.2%, 4=23.4%, 8=52.9%, 16=7.1%, 32=0.0%, >=64=0.0% 00:36:32.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.618 complete : 0=0.0%, 4=93.7%, 8=0.5%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.618 issued rwts: total=6924,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.618 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:32.618 filename2: (groupid=0, jobs=1): err= 0: pid=3431664: Wed Oct 16 07:19:30 2024 00:36:32.618 read: IOPS=701, BW=2805KiB/s (2872kB/s)(27.4MiB/10008msec) 00:36:32.618 slat (usec): min=5, max=105, avg=26.26, stdev=18.91 00:36:32.618 clat (usec): min=9063, max=42345, avg=22586.76, stdev=2840.92 00:36:32.618 lat (usec): min=9070, max=42364, avg=22613.02, stdev=2844.06 00:36:32.618 clat percentiles (usec): 00:36:32.618 | 1.00th=[13304], 5.00th=[16057], 10.00th=[20055], 20.00th=[22152], 00:36:32.618 | 30.00th=[22414], 40.00th=[22676], 50.00th=[22938], 60.00th=[23200], 00:36:32.618 | 70.00th=[23200], 80.00th=[23725], 90.00th=[24249], 95.00th=[25560], 00:36:32.618 | 99.00th=[32375], 99.50th=[34341], 99.90th=[40633], 99.95th=[42206], 00:36:32.618 | 99.99th=[42206] 00:36:32.618 bw ( KiB/s): min= 2560, max= 3280, per=4.21%, avg=2804.21, stdev=184.28, samples=19 00:36:32.618 iops : min= 640, max= 820, avg=701.05, stdev=46.07, samples=19 00:36:32.618 lat (msec) : 10=0.06%, 20=9.66%, 50=90.28% 00:36:32.618 cpu : usr=98.82%, sys=0.85%, ctx=17, majf=0, minf=36 00:36:32.618 IO depths : 1=5.0%, 2=10.0%, 4=21.0%, 8=56.2%, 16=7.8%, 32=0.0%, >=64=0.0% 00:36:32.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.618 complete : 0=0.0%, 4=93.0%, 8=1.5%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.618 issued rwts: total=7018,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.618 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:32.618 filename2: (groupid=0, jobs=1): err= 0: pid=3431665: Wed Oct 16 07:19:30 2024 00:36:32.618 read: IOPS=696, BW=2785KiB/s (2852kB/s)(27.2MiB/10004msec) 00:36:32.618 slat (usec): min=5, max=126, avg=25.22, stdev=17.87 00:36:32.618 clat (usec): min=7371, max=41079, avg=22762.57, stdev=3014.49 00:36:32.618 lat (usec): min=7377, max=41094, avg=22787.78, stdev=3016.35 00:36:32.618 clat percentiles (usec): 00:36:32.618 | 1.00th=[12256], 5.00th=[16188], 10.00th=[21627], 20.00th=[22414], 00:36:32.618 | 30.00th=[22414], 40.00th=[22676], 50.00th=[22938], 60.00th=[23200], 00:36:32.618 | 70.00th=[23462], 80.00th=[23725], 90.00th=[24511], 95.00th=[26084], 00:36:32.618 | 99.00th=[32637], 99.50th=[34866], 99.90th=[41157], 99.95th=[41157], 00:36:32.618 | 99.99th=[41157] 00:36:32.618 bw ( KiB/s): min= 2560, max= 3136, per=4.17%, avg=2778.11, stdev=129.68, samples=19 00:36:32.618 iops : min= 640, max= 784, avg=694.53, stdev=32.42, samples=19 
00:36:32.618 lat (msec) : 10=0.46%, 20=7.02%, 50=92.52% 00:36:32.618 cpu : usr=99.00%, sys=0.66%, ctx=19, majf=0, minf=30 00:36:32.618 IO depths : 1=4.8%, 2=9.5%, 4=20.1%, 8=57.4%, 16=8.2%, 32=0.0%, >=64=0.0% 00:36:32.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.618 complete : 0=0.0%, 4=92.8%, 8=1.9%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.618 issued rwts: total=6966,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.618 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:32.618 filename2: (groupid=0, jobs=1): err= 0: pid=3431666: Wed Oct 16 07:19:30 2024 00:36:32.618 read: IOPS=687, BW=2751KiB/s (2817kB/s)(26.9MiB/10004msec) 00:36:32.618 slat (usec): min=5, max=123, avg=19.86, stdev=17.81 00:36:32.618 clat (usec): min=5337, max=48872, avg=23138.68, stdev=3839.05 00:36:32.618 lat (usec): min=5345, max=48890, avg=23158.54, stdev=3839.11 00:36:32.618 clat percentiles (usec): 00:36:32.618 | 1.00th=[12387], 5.00th=[16450], 10.00th=[19530], 20.00th=[22152], 00:36:32.618 | 30.00th=[22414], 40.00th=[22676], 50.00th=[22938], 60.00th=[23200], 00:36:32.618 | 70.00th=[23725], 80.00th=[24249], 90.00th=[26870], 95.00th=[30016], 00:36:32.618 | 99.00th=[36439], 99.50th=[38536], 99.90th=[40633], 99.95th=[40633], 00:36:32.618 | 99.99th=[49021] 00:36:32.618 bw ( KiB/s): min= 2536, max= 2976, per=4.12%, avg=2741.89, stdev=95.33, samples=19 00:36:32.618 iops : min= 634, max= 744, avg=685.47, stdev=23.83, samples=19 00:36:32.618 lat (msec) : 10=0.54%, 20=10.76%, 50=88.71% 00:36:32.618 cpu : usr=99.00%, sys=0.66%, ctx=16, majf=0, minf=37 00:36:32.618 IO depths : 1=1.3%, 2=2.7%, 4=8.3%, 8=74.0%, 16=13.7%, 32=0.0%, >=64=0.0% 00:36:32.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.618 complete : 0=0.0%, 4=90.2%, 8=6.7%, 16=3.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.618 issued rwts: total=6880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.618 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:32.618 filename2: (groupid=0, jobs=1): err= 0: pid=3431667: Wed Oct 16 07:19:30 2024 00:36:32.618 read: IOPS=692, BW=2772KiB/s (2838kB/s)(27.1MiB/10003msec) 00:36:32.618 slat (usec): min=5, max=110, avg=17.63, stdev=16.65 00:36:32.618 clat (usec): min=3943, max=57857, avg=22975.38, stdev=4720.27 00:36:32.618 lat (usec): min=3950, max=57876, avg=22993.01, stdev=4721.64 00:36:32.618 clat percentiles (usec): 00:36:32.618 | 1.00th=[11076], 5.00th=[14615], 10.00th=[16909], 20.00th=[21365], 00:36:32.618 | 30.00th=[22414], 40.00th=[22676], 50.00th=[22938], 60.00th=[23200], 00:36:32.618 | 70.00th=[23725], 80.00th=[24249], 90.00th=[27919], 95.00th=[31851], 00:36:32.618 | 99.00th=[38536], 99.50th=[39584], 99.90th=[43779], 99.95th=[57934], 00:36:32.618 | 99.99th=[57934] 00:36:32.618 bw ( KiB/s): min= 2560, max= 2928, per=4.16%, avg=2769.11, stdev=100.10, samples=19 00:36:32.618 iops : min= 640, max= 732, avg=692.26, stdev=25.03, samples=19 00:36:32.618 lat (msec) : 4=0.06%, 10=0.61%, 20=15.15%, 50=84.12%, 100=0.07% 00:36:32.618 cpu : usr=98.77%, sys=0.89%, ctx=17, majf=0, minf=31 00:36:32.618 IO depths : 1=1.4%, 2=2.8%, 4=9.1%, 8=73.9%, 16=12.9%, 32=0.0%, >=64=0.0% 00:36:32.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.618 complete : 0=0.0%, 4=90.1%, 8=6.0%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.618 issued rwts: total=6932,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.618 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:32.618 filename2: (groupid=0, jobs=1): err= 0: 
pid=3431668: Wed Oct 16 07:19:30 2024 00:36:32.618 read: IOPS=703, BW=2815KiB/s (2883kB/s)(27.5MiB/10014msec) 00:36:32.618 slat (usec): min=5, max=111, avg=11.53, stdev= 9.43 00:36:32.618 clat (usec): min=4801, max=42159, avg=22636.71, stdev=2842.93 00:36:32.618 lat (usec): min=4808, max=42168, avg=22648.24, stdev=2842.64 00:36:32.618 clat percentiles (usec): 00:36:32.618 | 1.00th=[10683], 5.00th=[17957], 10.00th=[21627], 20.00th=[22414], 00:36:32.618 | 30.00th=[22676], 40.00th=[22938], 50.00th=[22938], 60.00th=[23200], 00:36:32.618 | 70.00th=[23462], 80.00th=[23725], 90.00th=[24249], 95.00th=[25035], 00:36:32.618 | 99.00th=[27657], 99.50th=[31589], 99.90th=[42206], 99.95th=[42206], 00:36:32.618 | 99.99th=[42206] 00:36:32.618 bw ( KiB/s): min= 2640, max= 3296, per=4.23%, avg=2812.63, stdev=155.09, samples=19 00:36:32.618 iops : min= 660, max= 824, avg=703.16, stdev=38.77, samples=19 00:36:32.618 lat (msec) : 10=0.82%, 20=5.76%, 50=93.42% 00:36:32.618 cpu : usr=98.75%, sys=0.87%, ctx=54, majf=0, minf=41 00:36:32.618 IO depths : 1=5.6%, 2=11.2%, 4=23.0%, 8=53.2%, 16=7.0%, 32=0.0%, >=64=0.0% 00:36:32.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.618 complete : 0=0.0%, 4=93.6%, 8=0.6%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.618 issued rwts: total=7048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.618 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:32.618 filename2: (groupid=0, jobs=1): err= 0: pid=3431669: Wed Oct 16 07:19:30 2024 00:36:32.618 read: IOPS=687, BW=2751KiB/s (2817kB/s)(26.9MiB/10002msec) 00:36:32.618 slat (usec): min=5, max=132, avg=16.41, stdev=17.44 00:36:32.618 clat (usec): min=7582, max=29885, avg=23124.68, stdev=1409.91 00:36:32.618 lat (usec): min=7592, max=29893, avg=23141.09, stdev=1408.89 00:36:32.618 clat percentiles (usec): 00:36:32.618 | 1.00th=[17171], 5.00th=[21890], 10.00th=[22414], 20.00th=[22676], 00:36:32.618 | 30.00th=[22676], 40.00th=[22938], 50.00th=[22938], 60.00th=[23200], 00:36:32.618 | 70.00th=[23462], 80.00th=[23725], 90.00th=[24249], 95.00th=[25035], 00:36:32.618 | 99.00th=[26608], 99.50th=[26870], 99.90th=[29754], 99.95th=[29754], 00:36:32.618 | 99.99th=[30016] 00:36:32.618 bw ( KiB/s): min= 2560, max= 2816, per=4.14%, avg=2755.37, stdev=78.31, samples=19 00:36:32.618 iops : min= 640, max= 704, avg=688.84, stdev=19.58, samples=19 00:36:32.619 lat (msec) : 10=0.09%, 20=1.19%, 50=98.72% 00:36:32.619 cpu : usr=98.95%, sys=0.73%, ctx=20, majf=0, minf=39 00:36:32.619 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:32.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.619 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.619 issued rwts: total=6880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.619 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:32.619 filename2: (groupid=0, jobs=1): err= 0: pid=3431670: Wed Oct 16 07:19:30 2024 00:36:32.619 read: IOPS=701, BW=2805KiB/s (2872kB/s)(27.4MiB/10001msec) 00:36:32.619 slat (usec): min=5, max=524, avg=25.71, stdev=19.33 00:36:32.619 clat (usec): min=3943, max=37707, avg=22611.85, stdev=2721.03 00:36:32.619 lat (usec): min=3949, max=37713, avg=22637.56, stdev=2724.40 00:36:32.619 clat percentiles (usec): 00:36:32.619 | 1.00th=[13435], 5.00th=[16188], 10.00th=[20579], 20.00th=[22152], 00:36:32.619 | 30.00th=[22414], 40.00th=[22676], 50.00th=[22938], 60.00th=[22938], 00:36:32.619 | 70.00th=[23200], 80.00th=[23725], 90.00th=[24249], 
95.00th=[25297], 00:36:32.619 | 99.00th=[31327], 99.50th=[32375], 99.90th=[36439], 99.95th=[37487], 00:36:32.619 | 99.99th=[37487] 00:36:32.619 bw ( KiB/s): min= 2688, max= 3152, per=4.22%, avg=2810.95, stdev=109.31, samples=19 00:36:32.619 iops : min= 672, max= 788, avg=702.74, stdev=27.33, samples=19 00:36:32.619 lat (msec) : 4=0.09%, 20=9.17%, 50=90.74% 00:36:32.619 cpu : usr=98.84%, sys=0.84%, ctx=15, majf=0, minf=43 00:36:32.619 IO depths : 1=4.4%, 2=9.1%, 4=19.7%, 8=58.4%, 16=8.4%, 32=0.0%, >=64=0.0% 00:36:32.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.619 complete : 0=0.0%, 4=92.7%, 8=1.9%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.619 issued rwts: total=7012,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.619 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:32.619 00:36:32.619 Run status group 0 (all jobs): 00:36:32.619 READ: bw=65.0MiB/s (68.1MB/s), 2722KiB/s-2855KiB/s (2787kB/s-2923kB/s), io=653MiB (685MB), run=10001-10050msec 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:32.619 
07:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.619 bdev_null0 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.619 [2024-10-16 07:19:30.955855] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.619 bdev_null1 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.619 07:19:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.619 07:19:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.619 07:19:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:36:32.619 07:19:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:36:32.619 07:19:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:32.619 07:19:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:36:32.619 07:19:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:32.619 07:19:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:36:32.619 07:19:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:36:32.619 07:19:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 
/dev/fd/61 00:36:32.619 07:19:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:32.619 { 00:36:32.619 "params": { 00:36:32.619 "name": "Nvme$subsystem", 00:36:32.619 "trtype": "$TEST_TRANSPORT", 00:36:32.619 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:32.619 "adrfam": "ipv4", 00:36:32.619 "trsvcid": "$NVMF_PORT", 00:36:32.619 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:32.619 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:32.619 "hdgst": ${hdgst:-false}, 00:36:32.619 "ddgst": ${ddgst:-false} 00:36:32.619 }, 00:36:32.619 "method": "bdev_nvme_attach_controller" 00:36:32.619 } 00:36:32.619 EOF 00:36:32.619 )") 00:36:32.619 07:19:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:32.619 07:19:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:32.619 07:19:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:32.620 07:19:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:32.620 07:19:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:32.620 07:19:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:32.620 07:19:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:32.620 07:19:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:36:32.620 07:19:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:32.620 07:19:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:32.620 07:19:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:36:32.620 07:19:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:32.620 07:19:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:36:32.620 07:19:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:32.620 07:19:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:32.620 07:19:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:32.620 07:19:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:32.620 07:19:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:36:32.620 07:19:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:32.620 { 00:36:32.620 "params": { 00:36:32.620 "name": "Nvme$subsystem", 00:36:32.620 "trtype": "$TEST_TRANSPORT", 00:36:32.620 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:32.620 "adrfam": "ipv4", 00:36:32.620 "trsvcid": "$NVMF_PORT", 00:36:32.620 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:32.620 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:32.620 "hdgst": ${hdgst:-false}, 00:36:32.620 "ddgst": ${ddgst:-false} 00:36:32.620 }, 00:36:32.620 "method": "bdev_nvme_attach_controller" 00:36:32.620 } 00:36:32.620 EOF 00:36:32.620 )") 00:36:32.620 07:19:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:32.620 07:19:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:32.620 07:19:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:36:32.620 
07:19:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:36:32.620 07:19:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:36:32.620 07:19:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:36:32.620 "params": { 00:36:32.620 "name": "Nvme0", 00:36:32.620 "trtype": "tcp", 00:36:32.620 "traddr": "10.0.0.2", 00:36:32.620 "adrfam": "ipv4", 00:36:32.620 "trsvcid": "4420", 00:36:32.620 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:32.620 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:32.620 "hdgst": false, 00:36:32.620 "ddgst": false 00:36:32.620 }, 00:36:32.620 "method": "bdev_nvme_attach_controller" 00:36:32.620 },{ 00:36:32.620 "params": { 00:36:32.620 "name": "Nvme1", 00:36:32.620 "trtype": "tcp", 00:36:32.620 "traddr": "10.0.0.2", 00:36:32.620 "adrfam": "ipv4", 00:36:32.620 "trsvcid": "4420", 00:36:32.620 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:32.620 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:32.620 "hdgst": false, 00:36:32.620 "ddgst": false 00:36:32.620 }, 00:36:32.620 "method": "bdev_nvme_attach_controller" 00:36:32.620 }' 00:36:32.620 07:19:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:32.620 07:19:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:32.620 07:19:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:32.620 07:19:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:36:32.620 07:19:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:32.620 07:19:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:32.620 07:19:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:32.620 07:19:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:32.620 07:19:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:32.620 07:19:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:32.620 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:32.620 ... 00:36:32.620 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:32.620 ... 
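The trace above shows how the harness drives fio against the NVMe-oF targets: gen_nvmf_target_json emits one bdev_nvme_attach_controller stanza per subsystem, and fio_bdev preloads SPDK's fio plugin so the spdk_bdev ioengine can consume that JSON. Reduced to its essentials the invocation is roughly the sketch below; the plugin path and fio binary are taken from this log, while SPDK_ROOT, config.json and job.fio are stand-ins for the /dev/fd/62 and /dev/fd/61 descriptors the harness actually passes.

    # sketch only -- config.json holds the printed Nvme0/Nvme1 attach params,
    # job.fio the generated 8k,16k,128k random-read job (numjobs=2, iodepth=8)
    LD_PRELOAD="$SPDK_ROOT/build/fio/spdk_bdev" \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf config.json job.fio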
00:36:32.620 fio-3.35 00:36:32.620 Starting 4 threads 00:36:37.908 00:36:37.908 filename0: (groupid=0, jobs=1): err= 0: pid=3433951: Wed Oct 16 07:19:37 2024 00:36:37.908 read: IOPS=3091, BW=24.1MiB/s (25.3MB/s)(121MiB/5002msec) 00:36:37.908 slat (nsec): min=5675, max=56173, avg=8183.32, stdev=2570.55 00:36:37.908 clat (usec): min=782, max=4421, avg=2566.15, stdev=363.96 00:36:37.908 lat (usec): min=805, max=4430, avg=2574.34, stdev=363.85 00:36:37.908 clat percentiles (usec): 00:36:37.908 | 1.00th=[ 1663], 5.00th=[ 1991], 10.00th=[ 2147], 20.00th=[ 2278], 00:36:37.908 | 30.00th=[ 2409], 40.00th=[ 2507], 50.00th=[ 2638], 60.00th=[ 2671], 00:36:37.908 | 70.00th=[ 2671], 80.00th=[ 2769], 90.00th=[ 2933], 95.00th=[ 3228], 00:36:37.908 | 99.00th=[ 3589], 99.50th=[ 3654], 99.90th=[ 3949], 99.95th=[ 4015], 00:36:37.908 | 99.99th=[ 4424] 00:36:37.908 bw ( KiB/s): min=24384, max=24960, per=25.95%, avg=24736.00, stdev=202.23, samples=9 00:36:37.908 iops : min= 3048, max= 3120, avg=3092.00, stdev=25.28, samples=9 00:36:37.908 lat (usec) : 1000=0.01% 00:36:37.908 lat (msec) : 2=5.77%, 4=94.17%, 10=0.06% 00:36:37.908 cpu : usr=96.56%, sys=3.14%, ctx=8, majf=0, minf=30 00:36:37.908 IO depths : 1=0.1%, 2=0.8%, 4=69.5%, 8=29.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:37.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:37.908 complete : 0=0.0%, 4=94.2%, 8=5.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:37.908 issued rwts: total=15462,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:37.908 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:37.908 filename0: (groupid=0, jobs=1): err= 0: pid=3433952: Wed Oct 16 07:19:37 2024 00:36:37.908 read: IOPS=2930, BW=22.9MiB/s (24.0MB/s)(115MiB/5002msec) 00:36:37.908 slat (nsec): min=5650, max=53023, avg=6466.86, stdev=1851.61 00:36:37.908 clat (usec): min=1256, max=4846, avg=2711.45, stdev=309.16 00:36:37.908 lat (usec): min=1274, max=4852, avg=2717.92, stdev=309.16 00:36:37.908 clat percentiles (usec): 00:36:37.908 | 1.00th=[ 2089], 5.00th=[ 2343], 10.00th=[ 2409], 20.00th=[ 2507], 00:36:37.908 | 30.00th=[ 2606], 40.00th=[ 2638], 50.00th=[ 2671], 60.00th=[ 2704], 00:36:37.908 | 70.00th=[ 2769], 80.00th=[ 2900], 90.00th=[ 2933], 95.00th=[ 3195], 00:36:37.908 | 99.00th=[ 3982], 99.50th=[ 4178], 99.90th=[ 4490], 99.95th=[ 4555], 00:36:37.908 | 99.99th=[ 4817] 00:36:37.908 bw ( KiB/s): min=23264, max=23632, per=24.60%, avg=23443.56, stdev=111.65, samples=9 00:36:37.908 iops : min= 2908, max= 2954, avg=2930.44, stdev=13.96, samples=9 00:36:37.908 lat (msec) : 2=0.76%, 4=98.33%, 10=0.91% 00:36:37.908 cpu : usr=96.70%, sys=3.06%, ctx=7, majf=0, minf=42 00:36:37.908 IO depths : 1=0.1%, 2=0.3%, 4=73.3%, 8=26.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:37.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:37.908 complete : 0=0.0%, 4=91.4%, 8=8.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:37.908 issued rwts: total=14659,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:37.908 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:37.908 filename1: (groupid=0, jobs=1): err= 0: pid=3433953: Wed Oct 16 07:19:37 2024 00:36:37.908 read: IOPS=2996, BW=23.4MiB/s (24.5MB/s)(117MiB/5001msec) 00:36:37.908 slat (nsec): min=8229, max=59964, avg=9190.64, stdev=2493.90 00:36:37.908 clat (usec): min=906, max=5439, avg=2645.65, stdev=343.83 00:36:37.908 lat (usec): min=915, max=5471, avg=2654.84, stdev=343.87 00:36:37.908 clat percentiles (usec): 00:36:37.908 | 1.00th=[ 1893], 5.00th=[ 2180], 10.00th=[ 2278], 20.00th=[ 2409], 
00:36:37.908 | 30.00th=[ 2507], 40.00th=[ 2573], 50.00th=[ 2638], 60.00th=[ 2671], 00:36:37.908 | 70.00th=[ 2704], 80.00th=[ 2868], 90.00th=[ 3032], 95.00th=[ 3294], 00:36:37.908 | 99.00th=[ 3720], 99.50th=[ 3916], 99.90th=[ 4359], 99.95th=[ 5014], 00:36:37.908 | 99.99th=[ 5407] 00:36:37.908 bw ( KiB/s): min=23552, max=24240, per=25.15%, avg=23973.33, stdev=207.38, samples=9 00:36:37.908 iops : min= 2944, max= 3030, avg=2996.67, stdev=25.92, samples=9 00:36:37.908 lat (usec) : 1000=0.01% 00:36:37.908 lat (msec) : 2=2.01%, 4=97.64%, 10=0.33% 00:36:37.908 cpu : usr=97.34%, sys=2.38%, ctx=8, majf=0, minf=61 00:36:37.908 IO depths : 1=0.1%, 2=0.6%, 4=70.8%, 8=28.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:37.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:37.908 complete : 0=0.0%, 4=93.3%, 8=6.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:37.908 issued rwts: total=14984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:37.908 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:37.908 filename1: (groupid=0, jobs=1): err= 0: pid=3433954: Wed Oct 16 07:19:37 2024 00:36:37.908 read: IOPS=2897, BW=22.6MiB/s (23.7MB/s)(113MiB/5001msec) 00:36:37.908 slat (nsec): min=8232, max=60404, avg=9372.10, stdev=2811.99 00:36:37.908 clat (usec): min=1331, max=5259, avg=2735.63, stdev=311.03 00:36:37.908 lat (usec): min=1345, max=5287, avg=2745.00, stdev=311.13 00:36:37.908 clat percentiles (usec): 00:36:37.908 | 1.00th=[ 2008], 5.00th=[ 2343], 10.00th=[ 2442], 20.00th=[ 2540], 00:36:37.908 | 30.00th=[ 2638], 40.00th=[ 2638], 50.00th=[ 2671], 60.00th=[ 2704], 00:36:37.908 | 70.00th=[ 2868], 80.00th=[ 2900], 90.00th=[ 3032], 95.00th=[ 3261], 00:36:37.908 | 99.00th=[ 3916], 99.50th=[ 4080], 99.90th=[ 4555], 99.95th=[ 4686], 00:36:37.908 | 99.99th=[ 5276] 00:36:37.908 bw ( KiB/s): min=22800, max=23344, per=24.30%, avg=23159.11, stdev=170.29, samples=9 00:36:37.908 iops : min= 2850, max= 2918, avg=2894.89, stdev=21.29, samples=9 00:36:37.908 lat (msec) : 2=0.93%, 4=98.35%, 10=0.72% 00:36:37.908 cpu : usr=89.96%, sys=6.02%, ctx=328, majf=0, minf=41 00:36:37.908 IO depths : 1=0.2%, 2=0.6%, 4=72.3%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:37.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:37.908 complete : 0=0.0%, 4=91.9%, 8=8.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:37.908 issued rwts: total=14488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:37.909 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:37.909 00:36:37.909 Run status group 0 (all jobs): 00:36:37.909 READ: bw=93.1MiB/s (97.6MB/s), 22.6MiB/s-24.1MiB/s (23.7MB/s-25.3MB/s), io=466MiB (488MB), run=5001-5002msec 00:36:37.909 07:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:36:37.909 07:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:37.909 07:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:37.909 07:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:37.909 07:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:37.909 07:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:37.909 07:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:37.909 07:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:37.909 07:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:36:37.909 07:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:37.909 07:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:37.909 07:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:37.909 07:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:37.909 07:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:37.909 07:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:37.909 07:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:37.909 07:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:37.909 07:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:37.909 07:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:37.909 07:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:37.909 07:19:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:37.909 07:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:37.909 07:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:37.909 07:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:37.909 00:36:37.909 real 0m24.613s 00:36:37.909 user 5m21.461s 00:36:37.909 sys 0m4.522s 00:36:37.909 07:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:37.909 07:19:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:37.909 ************************************ 00:36:37.909 END TEST fio_dif_rand_params 00:36:37.909 ************************************ 00:36:38.170 07:19:37 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:36:38.170 07:19:37 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:38.170 07:19:37 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:38.170 07:19:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:38.170 ************************************ 00:36:38.170 START TEST fio_dif_digest 00:36:38.170 ************************************ 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:36:38.170 
07:19:37 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:38.170 bdev_null0 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:38.170 [2024-10-16 07:19:37.506722] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # config=() 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # local subsystem config 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:38.170 { 00:36:38.170 "params": { 00:36:38.170 "name": "Nvme$subsystem", 00:36:38.170 "trtype": "$TEST_TRANSPORT", 00:36:38.170 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:38.170 "adrfam": "ipv4", 00:36:38.170 "trsvcid": "$NVMF_PORT", 00:36:38.170 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:36:38.170 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:38.170 "hdgst": ${hdgst:-false}, 00:36:38.170 "ddgst": ${ddgst:-false} 00:36:38.170 }, 00:36:38.170 "method": "bdev_nvme_attach_controller" 00:36:38.170 } 00:36:38.170 EOF 00:36:38.170 )") 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # cat 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # jq . 
00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@583 -- # IFS=, 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:36:38.170 "params": { 00:36:38.170 "name": "Nvme0", 00:36:38.170 "trtype": "tcp", 00:36:38.170 "traddr": "10.0.0.2", 00:36:38.170 "adrfam": "ipv4", 00:36:38.170 "trsvcid": "4420", 00:36:38.170 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:38.170 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:38.170 "hdgst": true, 00:36:38.170 "ddgst": true 00:36:38.170 }, 00:36:38.170 "method": "bdev_nvme_attach_controller" 00:36:38.170 }' 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:38.170 07:19:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:38.739 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:38.739 ... 
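The digest variant uses the same wiring with header and data digests enabled in the attach parameters ("hdgst": true, "ddgst": true in the JSON printed above). The target-side setup traced here could be replayed by hand against a running nvmf_tgt; a sketch, assuming rpc_cmd forwards to scripts/rpc.py as in SPDK's autotest helpers, with subcommands and arguments copied from this log:

    # create a 64 MiB null bdev with 512 B blocks, 16 B metadata, DIF type 3
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    # export it over NVMe/TCP on 10.0.0.2:4420
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420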
00:36:38.739 fio-3.35 00:36:38.739 Starting 3 threads 00:36:50.975 00:36:50.975 filename0: (groupid=0, jobs=1): err= 0: pid=3435369: Wed Oct 16 07:19:48 2024 00:36:50.975 read: IOPS=147, BW=18.4MiB/s (19.3MB/s)(185MiB/10035msec) 00:36:50.975 slat (nsec): min=6031, max=38163, avg=8682.15, stdev=1783.32 00:36:50.975 clat (msec): min=6, max=131, avg=20.31, stdev=20.17 00:36:50.975 lat (msec): min=6, max=131, avg=20.32, stdev=20.17 00:36:50.975 clat percentiles (msec): 00:36:50.975 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 10], 00:36:50.975 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 11], 00:36:50.975 | 70.00th=[ 12], 80.00th=[ 50], 90.00th=[ 52], 95.00th=[ 53], 00:36:50.975 | 99.00th=[ 92], 99.50th=[ 93], 99.90th=[ 94], 99.95th=[ 132], 00:36:50.975 | 99.99th=[ 132] 00:36:50.975 bw ( KiB/s): min=10496, max=28160, per=17.53%, avg=18918.40, stdev=4018.55, samples=20 00:36:50.975 iops : min= 82, max= 220, avg=147.80, stdev=31.39, samples=20 00:36:50.975 lat (msec) : 10=38.49%, 20=38.89%, 50=4.93%, 100=17.62%, 250=0.07% 00:36:50.975 cpu : usr=95.83%, sys=3.93%, ctx=21, majf=0, minf=54 00:36:50.975 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:50.975 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:50.975 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:50.975 issued rwts: total=1481,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:50.975 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:50.975 filename0: (groupid=0, jobs=1): err= 0: pid=3435370: Wed Oct 16 07:19:48 2024 00:36:50.975 read: IOPS=356, BW=44.6MiB/s (46.8MB/s)(446MiB/10004msec) 00:36:50.975 slat (nsec): min=6041, max=35886, avg=7362.40, stdev=1415.17 00:36:50.975 clat (usec): min=4364, max=12291, avg=8399.45, stdev=1296.83 00:36:50.975 lat (usec): min=4370, max=12298, avg=8406.82, stdev=1297.11 00:36:50.975 clat percentiles (usec): 00:36:50.975 | 1.00th=[ 6259], 5.00th=[ 6652], 10.00th=[ 6849], 20.00th=[ 7242], 00:36:50.975 | 30.00th=[ 7504], 40.00th=[ 7767], 50.00th=[ 8094], 60.00th=[ 8586], 00:36:50.975 | 70.00th=[ 9241], 80.00th=[ 9765], 90.00th=[10290], 95.00th=[10683], 00:36:50.975 | 99.00th=[11338], 99.50th=[11600], 99.90th=[12256], 99.95th=[12256], 00:36:50.975 | 99.99th=[12256] 00:36:50.975 bw ( KiB/s): min=41728, max=47616, per=42.32%, avg=45662.32, stdev=1512.62, samples=19 00:36:50.975 iops : min= 326, max= 372, avg=356.74, stdev=11.82, samples=19 00:36:50.975 lat (msec) : 10=85.65%, 20=14.35% 00:36:50.975 cpu : usr=95.44%, sys=4.31%, ctx=24, majf=0, minf=132 00:36:50.975 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:50.975 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:50.975 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:50.975 issued rwts: total=3569,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:50.975 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:50.975 filename0: (groupid=0, jobs=1): err= 0: pid=3435371: Wed Oct 16 07:19:48 2024 00:36:50.975 read: IOPS=340, BW=42.5MiB/s (44.6MB/s)(427MiB/10043msec) 00:36:50.975 slat (nsec): min=5892, max=33052, avg=7128.81, stdev=1055.24 00:36:50.975 clat (usec): min=5528, max=51628, avg=8802.03, stdev=2022.01 00:36:50.975 lat (usec): min=5535, max=51661, avg=8809.16, stdev=2022.43 00:36:50.975 clat percentiles (usec): 00:36:50.975 | 1.00th=[ 6587], 5.00th=[ 7046], 10.00th=[ 7242], 20.00th=[ 7570], 00:36:50.975 | 30.00th=[ 7898], 40.00th=[ 8160], 50.00th=[ 
8455], 60.00th=[ 8979], 00:36:50.975 | 70.00th=[ 9503], 80.00th=[10028], 90.00th=[10552], 95.00th=[10945], 00:36:50.975 | 99.00th=[11600], 99.50th=[11994], 99.90th=[47973], 99.95th=[51643], 00:36:50.975 | 99.99th=[51643] 00:36:50.975 bw ( KiB/s): min=39246, max=46080, per=40.50%, avg=43690.30, stdev=1734.17, samples=20 00:36:50.975 iops : min= 306, max= 360, avg=341.30, stdev=13.63, samples=20 00:36:50.975 lat (msec) : 10=80.20%, 20=19.65%, 50=0.06%, 100=0.09% 00:36:50.975 cpu : usr=95.63%, sys=4.14%, ctx=17, majf=0, minf=179 00:36:50.975 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:50.975 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:50.975 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:50.975 issued rwts: total=3415,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:50.975 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:50.975 00:36:50.975 Run status group 0 (all jobs): 00:36:50.976 READ: bw=105MiB/s (110MB/s), 18.4MiB/s-44.6MiB/s (19.3MB/s-46.8MB/s), io=1058MiB (1110MB), run=10004-10043msec 00:36:50.976 07:19:48 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:36:50.976 07:19:48 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:36:50.976 07:19:48 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:36:50.976 07:19:48 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:50.976 07:19:48 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:36:50.976 07:19:48 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:50.976 07:19:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:50.976 07:19:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:50.976 07:19:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:50.976 07:19:48 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:50.976 07:19:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:50.976 07:19:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:50.976 07:19:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:50.976 00:36:50.976 real 0m11.107s 00:36:50.976 user 0m42.734s 00:36:50.976 sys 0m1.567s 00:36:50.976 07:19:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:50.976 07:19:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:50.976 ************************************ 00:36:50.976 END TEST fio_dif_digest 00:36:50.976 ************************************ 00:36:50.976 07:19:48 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:36:50.976 07:19:48 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:36:50.976 07:19:48 nvmf_dif -- nvmf/common.sh@514 -- # nvmfcleanup 00:36:50.976 07:19:48 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:36:50.976 07:19:48 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:50.976 07:19:48 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:36:50.976 07:19:48 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:50.976 07:19:48 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:50.976 rmmod nvme_tcp 00:36:50.976 rmmod nvme_fabrics 00:36:50.976 rmmod nvme_keyring 00:36:50.976 07:19:48 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
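For reference, the fio_dif_digest workload summarized above runs three reader threads for roughly ten seconds and sustains ~105MiB/s aggregate READ bandwidth. A comparable job can be sketched as below; this is a hypothetical approximation, not the job file target/dif.sh generates, and the filename mapping to the attached NVMe-oF namespace is an assumption:

# Hypothetical fio job approximating the 3-thread randread run logged above.
# filename=/dev/nvme0n1 is an assumption; the test points fio at the
# namespaces it attached, which may surface under a different name.
cat > dif_digest_repro.fio <<'EOF'
[global]
thread
ioengine=libaio
direct=1
rw=randread
bs=4096
time_based
runtime=10
[reader]
numjobs=3
filename=/dev/nvme0n1
EOF
fio dif_digest_repro.fio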
00:36:50.976 07:19:48 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:36:50.976 07:19:48 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:36:50.976 07:19:48 nvmf_dif -- nvmf/common.sh@515 -- # '[' -n 3425184 ']' 00:36:50.976 07:19:48 nvmf_dif -- nvmf/common.sh@516 -- # killprocess 3425184 00:36:50.976 07:19:48 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 3425184 ']' 00:36:50.976 07:19:48 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 3425184 00:36:50.976 07:19:48 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:36:50.976 07:19:48 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:50.976 07:19:48 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3425184 00:36:50.976 07:19:48 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:50.976 07:19:48 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:50.976 07:19:48 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3425184' 00:36:50.976 killing process with pid 3425184 00:36:50.976 07:19:48 nvmf_dif -- common/autotest_common.sh@969 -- # kill 3425184 00:36:50.976 07:19:48 nvmf_dif -- common/autotest_common.sh@974 -- # wait 3425184 00:36:50.976 07:19:48 nvmf_dif -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:36:50.976 07:19:48 nvmf_dif -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:52.988 Waiting for block devices as requested 00:36:52.988 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:52.988 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:52.988 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:53.262 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:53.262 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:53.262 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:53.262 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:53.523 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:53.523 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:36:53.785 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:53.785 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:53.785 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:54.046 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:54.046 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:54.046 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:54.308 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:54.308 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:54.569 07:19:53 nvmf_dif -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:36:54.569 07:19:53 nvmf_dif -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:36:54.569 07:19:53 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:36:54.569 07:19:53 nvmf_dif -- nvmf/common.sh@789 -- # iptables-save 00:36:54.569 07:19:53 nvmf_dif -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:36:54.569 07:19:53 nvmf_dif -- nvmf/common.sh@789 -- # iptables-restore 00:36:54.569 07:19:53 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:54.569 07:19:53 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:54.569 07:19:53 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:54.569 07:19:53 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:54.569 07:19:53 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:57.121 07:19:56 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:57.121 00:36:57.121 real 1m18.117s 00:36:57.121 user 8m1.730s 
00:36:57.121 sys 0m21.700s 00:36:57.121 07:19:56 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:57.121 07:19:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:57.121 ************************************ 00:36:57.121 END TEST nvmf_dif 00:36:57.121 ************************************ 00:36:57.121 07:19:56 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:57.121 07:19:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:57.121 07:19:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:57.121 07:19:56 -- common/autotest_common.sh@10 -- # set +x 00:36:57.121 ************************************ 00:36:57.121 START TEST nvmf_abort_qd_sizes 00:36:57.121 ************************************ 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:57.121 * Looking for test storage... 00:36:57.121 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:57.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:57.121 --rc genhtml_branch_coverage=1 00:36:57.121 --rc genhtml_function_coverage=1 00:36:57.121 --rc genhtml_legend=1 00:36:57.121 --rc geninfo_all_blocks=1 00:36:57.121 --rc geninfo_unexecuted_blocks=1 00:36:57.121 00:36:57.121 ' 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:57.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:57.121 --rc genhtml_branch_coverage=1 00:36:57.121 --rc genhtml_function_coverage=1 00:36:57.121 --rc genhtml_legend=1 00:36:57.121 --rc geninfo_all_blocks=1 00:36:57.121 --rc geninfo_unexecuted_blocks=1 00:36:57.121 00:36:57.121 ' 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:57.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:57.121 --rc genhtml_branch_coverage=1 00:36:57.121 --rc genhtml_function_coverage=1 00:36:57.121 --rc genhtml_legend=1 00:36:57.121 --rc geninfo_all_blocks=1 00:36:57.121 --rc geninfo_unexecuted_blocks=1 00:36:57.121 00:36:57.121 ' 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:57.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:57.121 --rc genhtml_branch_coverage=1 00:36:57.121 --rc genhtml_function_coverage=1 00:36:57.121 --rc genhtml_legend=1 00:36:57.121 --rc geninfo_all_blocks=1 00:36:57.121 --rc geninfo_unexecuted_blocks=1 00:36:57.121 00:36:57.121 ' 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:57.121 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:57.121 07:19:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:36:57.122 07:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:36:57.122 07:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:57.122 07:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # prepare_net_devs 00:36:57.122 07:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # local -g is_hw=no 00:36:57.122 07:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # remove_spdk_ns 00:36:57.122 07:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:57.122 07:19:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:57.122 07:19:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:57.122 07:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:36:57.122 07:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:36:57.122 07:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:36:57.122 07:19:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:05.271 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:05.271 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:37:05.271 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:05.271 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:05.271 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:05.271 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:05.271 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:05.271 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:37:05.271 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:05.271 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:37:05.271 07:20:03 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:37:05.271 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:37:05.271 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:37:05.271 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:37:05.271 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:37:05.271 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:05.271 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:05.271 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:05.271 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:05.271 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:05.271 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:05.271 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:05.271 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:05.271 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:05.271 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:05.271 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:05.271 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:05.271 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:05.271 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:05.271 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:05.271 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:05.271 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:05.271 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:05.271 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:05.271 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:05.271 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:05.271 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:05.271 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:05.271 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:05.271 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:05.271 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:05.271 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:05.271 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:05.271 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:05.271 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:05.271 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:05.271 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:05.271 07:20:03 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:05.271 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:05.271 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:05.271 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:05.271 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:05.271 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:05.271 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:05.271 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:05.271 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:05.271 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:05.271 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:05.271 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:05.271 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:05.272 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:05.272 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:05.272 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:05.272 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:05.272 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:05.272 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:05.272 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:05.272 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:05.272 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:05.272 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:05.272 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:05.272 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:05.272 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:37:05.272 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # is_hw=yes 00:37:05.272 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:37:05.272 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:37:05.272 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:37:05.272 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:05.272 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:05.272 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:05.272 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:05.272 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:05.272 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:05.272 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:05.272 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:05.272 07:20:03 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:05.272 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:05.272 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:05.272 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:05.272 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:05.272 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:05.272 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:05.272 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:05.272 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:05.272 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:05.272 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:05.272 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:05.272 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:05.272 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:05.272 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:05.272 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:05.272 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.640 ms 00:37:05.272 00:37:05.272 --- 10.0.0.2 ping statistics --- 00:37:05.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:05.272 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms 00:37:05.272 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:05.272 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:05.272 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:37:05.272 00:37:05.272 --- 10.0.0.1 ping statistics --- 00:37:05.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:05.272 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:37:05.272 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:05.272 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # return 0 00:37:05.272 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:37:05.272 07:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:07.822 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:07.822 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:07.822 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:07.822 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:07.822 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:07.822 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:07.822 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:07.822 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:07.822 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:08.083 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:08.083 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:08.083 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:08.083 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:08.083 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:08.083 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:08.083 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:08.083 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:37:08.344 07:20:07 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:08.344 07:20:07 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:37:08.344 07:20:07 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:37:08.344 07:20:07 nvmf_abort_qd_sizes -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:08.344 07:20:07 nvmf_abort_qd_sizes -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:37:08.344 07:20:07 nvmf_abort_qd_sizes -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:37:08.606 07:20:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:37:08.606 07:20:07 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:37:08.606 07:20:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:08.606 07:20:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:08.606 07:20:07 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # nvmfpid=3444807 00:37:08.606 07:20:07 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # waitforlisten 3444807 00:37:08.606 07:20:07 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:37:08.606 07:20:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 3444807 ']' 00:37:08.606 07:20:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:08.606 07:20:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:08.606 07:20:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:37:08.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:08.606 07:20:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:08.606 07:20:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:08.606 [2024-10-16 07:20:07.914377] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:37:08.606 [2024-10-16 07:20:07.914442] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:08.606 [2024-10-16 07:20:08.005510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:08.606 [2024-10-16 07:20:08.059709] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:08.606 [2024-10-16 07:20:08.059767] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:08.606 [2024-10-16 07:20:08.059776] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:08.606 [2024-10-16 07:20:08.059783] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:08.606 [2024-10-16 07:20:08.059789] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:08.606 [2024-10-16 07:20:08.061873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:08.606 [2024-10-16 07:20:08.061981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:08.606 [2024-10-16 07:20:08.062249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:08.606 [2024-10-16 07:20:08.062252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:09.552 07:20:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:09.552 07:20:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:37:09.552 07:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:37:09.552 07:20:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:09.552 07:20:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:09.552 07:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:09.552 07:20:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:37:09.552 07:20:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:37:09.552 07:20:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:37:09.552 07:20:08 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:37:09.552 07:20:08 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:37:09.552 07:20:08 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:37:09.552 07:20:08 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:37:09.552 07:20:08 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:37:09.552 07:20:08 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:37:09.552 07:20:08 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:37:09.552 
07:20:08 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:37:09.552 07:20:08 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:37:09.552 07:20:08 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:37:09.552 07:20:08 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:37:09.552 07:20:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:37:09.552 07:20:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:37:09.552 07:20:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:37:09.552 07:20:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:09.552 07:20:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:09.552 07:20:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:09.552 ************************************ 00:37:09.552 START TEST spdk_target_abort 00:37:09.552 ************************************ 00:37:09.552 07:20:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:37:09.552 07:20:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:37:09.552 07:20:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:37:09.552 07:20:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:09.552 07:20:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:09.813 spdk_targetn1 00:37:09.813 07:20:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:09.813 07:20:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:09.813 07:20:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:09.813 07:20:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:09.813 [2024-10-16 07:20:09.157792] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:09.813 07:20:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:09.813 07:20:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:37:09.813 07:20:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:09.814 07:20:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:09.814 07:20:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:09.814 07:20:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:37:09.814 07:20:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:09.814 07:20:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:09.814 07:20:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:09.814 07:20:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:37:09.814 07:20:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:09.814 07:20:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:09.814 [2024-10-16 07:20:09.207032] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:09.814 07:20:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:09.814 07:20:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:37:09.814 07:20:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:09.814 07:20:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:09.814 07:20:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:37:09.814 07:20:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:09.814 07:20:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:09.814 07:20:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:09.814 07:20:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:09.814 07:20:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:09.814 07:20:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:09.814 07:20:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:09.814 07:20:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:09.814 07:20:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:09.814 07:20:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:09.814 07:20:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:37:09.814 07:20:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:09.814 07:20:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:09.814 07:20:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:09.814 07:20:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:09.814 07:20:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:09.814 07:20:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:10.075 [2024-10-16 07:20:09.370156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 
cid:191 nsid:1 lba:296 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:37:10.076 [2024-10-16 07:20:09.370192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0028 p:1 m:0 dnr:0 00:37:10.076 [2024-10-16 07:20:09.377358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:512 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:37:10.076 [2024-10-16 07:20:09.377378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0042 p:1 m:0 dnr:0 00:37:10.076 [2024-10-16 07:20:09.393424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:1040 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:37:10.076 [2024-10-16 07:20:09.393445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0084 p:1 m:0 dnr:0 00:37:10.076 [2024-10-16 07:20:09.401360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:1328 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:37:10.076 [2024-10-16 07:20:09.401381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00a7 p:1 m:0 dnr:0 00:37:10.076 [2024-10-16 07:20:09.425468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:2152 len:8 PRP1 0x200004abe000 PRP2 0x0 00:37:10.076 [2024-10-16 07:20:09.425489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:37:10.076 [2024-10-16 07:20:09.450162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:3000 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:37:10.076 [2024-10-16 07:20:09.450183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:10.076 [2024-10-16 07:20:09.466108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:3536 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:37:10.076 [2024-10-16 07:20:09.466129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00bd p:0 m:0 dnr:0 00:37:10.076 [2024-10-16 07:20:09.473346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:3768 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:37:10.076 [2024-10-16 07:20:09.473366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00da p:0 m:0 dnr:0 00:37:13.380 Initializing NVMe Controllers 00:37:13.380 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:13.381 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:13.381 Initialization complete. Launching workers. 
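As a consistency check on the qd=4 run above: aborts submitted (2666) plus aborts that failed to submit (7973) is 10639, which matches I/O completed (10631) plus failed (8); and success (669) + unsuccessful (1997) + failed (0) accounts for all 2666 submitted aborts.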
00:37:13.381 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10631, failed: 8 00:37:13.381 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2666, failed to submit 7973 00:37:13.381 success 669, unsuccessful 1997, failed 0 00:37:13.381 07:20:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:13.381 07:20:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:13.381 [2024-10-16 07:20:12.683133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:172 nsid:1 lba:216 len:8 PRP1 0x200004e50000 PRP2 0x0 00:37:13.381 [2024-10-16 07:20:12.683171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:172 cdw0:0 sqhd:002c p:1 m:0 dnr:0 00:37:13.381 [2024-10-16 07:20:12.725886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:190 nsid:1 lba:1400 len:8 PRP1 0x200004e3a000 PRP2 0x0 00:37:13.381 [2024-10-16 07:20:12.725915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:190 cdw0:0 sqhd:00b7 p:1 m:0 dnr:0 00:37:13.381 [2024-10-16 07:20:12.786994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:190 nsid:1 lba:2784 len:8 PRP1 0x200004e44000 PRP2 0x0 00:37:13.381 [2024-10-16 07:20:12.787020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:190 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:37:13.381 [2024-10-16 07:20:12.794987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:188 nsid:1 lba:2896 len:8 PRP1 0x200004e5a000 PRP2 0x0 00:37:13.381 [2024-10-16 07:20:12.795011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:188 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:13.381 [2024-10-16 07:20:12.810489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:186 nsid:1 lba:3312 len:8 PRP1 0x200004e46000 PRP2 0x0 00:37:13.381 [2024-10-16 07:20:12.810512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:186 cdw0:0 sqhd:00ac p:0 m:0 dnr:0 00:37:13.381 [2024-10-16 07:20:12.818006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:188 nsid:1 lba:3568 len:8 PRP1 0x200004e50000 PRP2 0x0 00:37:13.381 [2024-10-16 07:20:12.818030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:188 cdw0:0 sqhd:00c0 p:0 m:0 dnr:0 00:37:16.685 Initializing NVMe Controllers 00:37:16.685 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:16.685 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:16.685 Initialization complete. Launching workers. 
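The abort example takes its parameters on the command line, so the qd=24 run above can be reproduced standalone as sketched here (flag meanings per the SPDK abort/perf examples: -q I/O queue depth, -w workload type, -M read percentage for mixed I/O, -o I/O size in bytes, -r target transport ID; only the workspace path is taken from this log):

# Same invocation as the qd=24 run traced above.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'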
00:37:16.685 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8610, failed: 6 00:37:16.685 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1210, failed to submit 7406 00:37:16.685 success 339, unsuccessful 871, failed 0 00:37:16.685 07:20:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:16.685 07:20:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:18.071 [2024-10-16 07:20:17.377056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:145 nsid:1 lba:166048 len:8 PRP1 0x200004ace000 PRP2 0x0 00:37:18.071 [2024-10-16 07:20:17.377102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:145 cdw0:0 sqhd:00ab p:1 m:0 dnr:0 00:37:19.456 [2024-10-16 07:20:18.733259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:147 nsid:1 lba:321616 len:8 PRP1 0x200004afc000 PRP2 0x0 00:37:19.456 [2024-10-16 07:20:18.733285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:147 cdw0:0 sqhd:00a1 p:1 m:0 dnr:0 00:37:19.717 Initializing NVMe Controllers 00:37:19.717 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:19.717 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:19.717 Initialization complete. Launching workers. 00:37:19.717 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 43271, failed: 2 00:37:19.717 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2670, failed to submit 40603 00:37:19.717 success 612, unsuccessful 2058, failed 0 00:37:19.717 07:20:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:37:19.717 07:20:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.717 07:20:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:19.717 07:20:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.717 07:20:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:37:19.717 07:20:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.717 07:20:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:21.634 07:20:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:21.634 07:20:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3444807 00:37:21.634 07:20:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 3444807 ']' 00:37:21.634 07:20:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 3444807 00:37:21.634 07:20:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:37:21.634 07:20:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:21.634 07:20:20 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3444807 00:37:21.634 07:20:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:21.634 07:20:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:21.634 07:20:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3444807' 00:37:21.634 killing process with pid 3444807 00:37:21.634 07:20:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 3444807 00:37:21.634 07:20:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 3444807 00:37:21.634 00:37:21.634 real 0m12.149s 00:37:21.634 user 0m49.513s 00:37:21.634 sys 0m2.073s 00:37:21.634 07:20:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:21.634 07:20:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:21.634 ************************************ 00:37:21.634 END TEST spdk_target_abort 00:37:21.634 ************************************ 00:37:21.634 07:20:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:37:21.634 07:20:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:21.634 07:20:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:21.634 07:20:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:21.634 ************************************ 00:37:21.634 START TEST kernel_target_abort 00:37:21.634 ************************************ 00:37:21.634 07:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:37:21.634 07:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:37:21.634 07:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@767 -- # local ip 00:37:21.634 07:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:21.634 07:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:21.634 07:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:21.634 07:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:21.634 07:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:21.634 07:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:21.634 07:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:21.634 07:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:21.634 07:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:21.634 07:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:37:21.634 07:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:37:21.634 07:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:37:21.634 07:20:21 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:21.634 07:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:21.634 07:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:37:21.634 07:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # local block nvme 00:37:21.634 07:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]] 00:37:21.634 07:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # modprobe nvmet 00:37:21.634 07:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:37:21.634 07:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:24.938 Waiting for block devices as requested 00:37:24.938 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:25.199 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:25.199 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:25.199 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:25.461 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:25.461 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:25.461 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:25.722 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:25.722 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:25.984 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:25.984 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:25.984 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:25.984 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:26.245 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:26.245 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:26.245 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:26.506 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:26.767 07:20:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:37:26.767 07:20:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:37:26.767 07:20:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:37:26.767 07:20:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:37:26.767 07:20:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:37:26.767 07:20:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:37:26.767 07:20:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:37:26.767 07:20:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:37:26.767 07:20:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:37:26.767 No valid GPT data, bailing 00:37:26.767 07:20:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:37:26.767 07:20:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:37:26.768 07:20:26 
nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:37:26.768 07:20:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:37:26.768 07:20:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:37:26.768 07:20:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:26.768 07:20:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:26.768 07:20:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:37:26.768 07:20:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:37:26.768 07:20:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 1 00:37:26.768 07:20:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:37:26.768 07:20:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:37:26.768 07:20:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:37:26.768 07:20:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo tcp 00:37:26.768 07:20:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 4420 00:37:26.768 07:20:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo ipv4 00:37:26.768 07:20:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:37:26.768 07:20:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:37:27.029 00:37:27.029 Discovery Log Number of Records 2, Generation counter 2 00:37:27.029 =====Discovery Log Entry 0====== 00:37:27.029 trtype: tcp 00:37:27.029 adrfam: ipv4 00:37:27.029 subtype: current discovery subsystem 00:37:27.029 treq: not specified, sq flow control disable supported 00:37:27.029 portid: 1 00:37:27.029 trsvcid: 4420 00:37:27.029 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:37:27.029 traddr: 10.0.0.1 00:37:27.029 eflags: none 00:37:27.029 sectype: none 00:37:27.029 =====Discovery Log Entry 1====== 00:37:27.029 trtype: tcp 00:37:27.029 adrfam: ipv4 00:37:27.029 subtype: nvme subsystem 00:37:27.029 treq: not specified, sq flow control disable supported 00:37:27.029 portid: 1 00:37:27.029 trsvcid: 4420 00:37:27.029 subnqn: nqn.2016-06.io.spdk:testnqn 00:37:27.029 traddr: 10.0.0.1 00:37:27.029 eflags: none 00:37:27.029 sectype: none 00:37:27.029 07:20:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:37:27.029 07:20:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:27.029 07:20:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:27.029 07:20:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:37:27.029 07:20:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:27.029 
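[editor's note] The configure_kernel_target trace above (modprobe, the mkdir/echo sequence, then ln -s) is the entire kernel NVMe/TCP target setup; xtrace hides the redirection targets of the bare `echo` commands, so the sketch below restores them using the standard upstream nvmet configfs layout. A minimal standalone sketch, assuming the NQN, address, and backing device from this run; the attr_serial target is a guess (it could be attr_model), the rest are the usual nvmet attribute names.

#!/usr/bin/env bash
# Hedged reconstruction of the kernel nvmet setup traced above.
set -e
nqn=nqn.2016-06.io.spdk:testnqn
subsys=/sys/kernel/config/nvmet/subsystems/$nqn
port=/sys/kernel/config/nvmet/ports/1

modprobe nvmet nvmet_tcp                       # target core + TCP transport
mkdir -p "$subsys/namespaces/1" "$port"

echo "SPDK-$nqn" > "$subsys/attr_serial"       # assumption: xtrace drops the redirect; attr_serial is a guess
echo 1 > "$subsys/attr_allow_any_host"         # assumption: the bare 'echo 1' at @693 targets allow_any_host
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"         # enable the namespace

echo 10.0.0.1 > "$port/addr_traddr"
echo tcp      > "$port/addr_trtype"
echo 4420     > "$port/addr_trsvcid"
echo ipv4     > "$port/addr_adrfam"

ln -s "$subsys" "$port/subsystems/"            # expose the subsystem on the port

After this symlink the target answers `nvme discover -t tcp -a 10.0.0.1 -s 4420`, which is exactly the two-entry discovery log shown above.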
07:20:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:27.029 07:20:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:27.029 07:20:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:27.029 07:20:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:27.029 07:20:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:27.029 07:20:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:27.029 07:20:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:27.029 07:20:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:27.029 07:20:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:27.029 07:20:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:37:27.029 07:20:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:27.029 07:20:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:37:27.029 07:20:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:27.029 07:20:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:27.029 07:20:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:27.029 07:20:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:30.335 Initializing NVMe Controllers 00:37:30.335 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:30.335 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:30.335 Initialization complete. Launching workers. 
00:37:30.335 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67472, failed: 0 00:37:30.335 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 67472, failed to submit 0 00:37:30.335 success 0, unsuccessful 67472, failed 0 00:37:30.335 07:20:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:30.335 07:20:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:33.642 Initializing NVMe Controllers 00:37:33.642 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:33.642 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:33.642 Initialization complete. Launching workers. 00:37:33.642 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 118780, failed: 0 00:37:33.642 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29898, failed to submit 88882 00:37:33.642 success 0, unsuccessful 29898, failed 0 00:37:33.642 07:20:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:33.642 07:20:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:36.187 Initializing NVMe Controllers 00:37:36.187 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:36.187 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:36.187 Initialization complete. Launching workers. 
00:37:36.187 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 145587, failed: 0 00:37:36.187 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36390, failed to submit 109197 00:37:36.187 success 0, unsuccessful 36390, failed 0 00:37:36.187 07:20:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:37:36.187 07:20:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:37:36.187 07:20:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # echo 0 00:37:36.187 07:20:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:36.187 07:20:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:36.187 07:20:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:37:36.187 07:20:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:36.187 07:20:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:37:36.187 07:20:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:37:36.448 07:20:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:39.750 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:39.750 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:39.750 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:39.750 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:39.750 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:39.750 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:39.750 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:39.750 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:40.010 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:40.011 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:40.011 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:40.011 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:40.011 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:40.011 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:40.011 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:40.011 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:41.924 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:37:42.186 00:37:42.186 real 0m20.360s 00:37:42.186 user 0m9.905s 00:37:42.186 sys 0m6.110s 00:37:42.186 07:20:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:42.186 07:20:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:42.186 ************************************ 00:37:42.186 END TEST kernel_target_abort 00:37:42.186 ************************************ 00:37:42.186 07:20:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:37:42.186 07:20:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:37:42.186 07:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # nvmfcleanup 00:37:42.186 07:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:37:42.186 07:20:41 nvmf_abort_qd_sizes -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:42.186 07:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:37:42.186 07:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:42.186 07:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:42.186 rmmod nvme_tcp 00:37:42.186 rmmod nvme_fabrics 00:37:42.186 rmmod nvme_keyring 00:37:42.186 07:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:42.186 07:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:37:42.186 07:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:37:42.186 07:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@515 -- # '[' -n 3444807 ']' 00:37:42.186 07:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # killprocess 3444807 00:37:42.186 07:20:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 3444807 ']' 00:37:42.186 07:20:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 3444807 00:37:42.186 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3444807) - No such process 00:37:42.186 07:20:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 3444807 is not found' 00:37:42.186 Process with pid 3444807 is not found 00:37:42.186 07:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:37:42.186 07:20:41 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:45.490 Waiting for block devices as requested 00:37:45.490 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:45.751 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:45.751 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:45.751 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:46.012 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:46.012 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:46.012 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:46.273 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:46.273 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:46.534 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:46.534 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:46.534 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:46.796 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:46.796 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:46.796 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:47.057 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:47.057 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:47.318 07:20:46 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:37:47.318 07:20:46 nvmf_abort_qd_sizes -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:37:47.318 07:20:46 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:37:47.318 07:20:46 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-save 00:37:47.318 07:20:46 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:37:47.318 07:20:46 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-restore 00:37:47.319 07:20:46 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:47.319 07:20:46 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:47.319 07:20:46 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:47.319 07:20:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 
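[editor's note] The clean_kernel_target steps traced earlier (echo 0, rm -f the port symlink, three rmdirs, modprobe -r) undo the setup in strict reverse order; configfs refuses to rmdir a directory that is still linked or enabled, so the ordering is load-bearing. A sketch of the same teardown, using the paths from this run:

#!/usr/bin/env bash
# Hedged sketch of the clean_kernel_target teardown traced above.
# Order matters: disable/unlink first, children before parents.
nqn=nqn.2016-06.io.spdk:testnqn
subsys=/sys/kernel/config/nvmet/subsystems/$nqn

echo 0 > "$subsys/namespaces/1/enable"               # disable the namespace
rm -f "/sys/kernel/config/nvmet/ports/1/subsystems/$nqn"
rmdir "$subsys/namespaces/1"
rmdir /sys/kernel/config/nvmet/ports/1
rmdir "$subsys"
modprobe -r nvmet_tcp nvmet                          # matches the log's module removal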
00:37:47.319 07:20:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:49.867 07:20:48 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:49.867 00:37:49.867 real 0m52.644s 00:37:49.867 user 1m4.777s 00:37:49.867 sys 0m19.593s 00:37:49.867 07:20:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:49.867 07:20:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:49.867 ************************************ 00:37:49.867 END TEST nvmf_abort_qd_sizes 00:37:49.867 ************************************ 00:37:49.867 07:20:48 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:49.867 07:20:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:49.867 07:20:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:49.867 07:20:48 -- common/autotest_common.sh@10 -- # set +x 00:37:49.867 ************************************ 00:37:49.867 START TEST keyring_file 00:37:49.867 ************************************ 00:37:49.867 07:20:48 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:49.867 * Looking for test storage... 00:37:49.867 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:49.867 07:20:48 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:49.867 07:20:48 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:37:49.867 07:20:48 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:49.867 07:20:49 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:49.867 07:20:49 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:49.867 07:20:49 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:49.867 07:20:49 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:49.867 07:20:49 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:37:49.867 07:20:49 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:37:49.867 07:20:49 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:37:49.867 07:20:49 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:37:49.867 07:20:49 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:37:49.867 07:20:49 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:37:49.867 07:20:49 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:37:49.867 07:20:49 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:49.867 07:20:49 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:37:49.867 07:20:49 keyring_file -- scripts/common.sh@345 -- # : 1 00:37:49.867 07:20:49 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:49.867 07:20:49 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:49.867 07:20:49 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:37:49.867 07:20:49 keyring_file -- scripts/common.sh@353 -- # local d=1 00:37:49.867 07:20:49 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:49.867 07:20:49 keyring_file -- scripts/common.sh@355 -- # echo 1 00:37:49.867 07:20:49 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:37:49.867 07:20:49 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:37:49.867 07:20:49 keyring_file -- scripts/common.sh@353 -- # local d=2 00:37:49.867 07:20:49 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:49.867 07:20:49 keyring_file -- scripts/common.sh@355 -- # echo 2 00:37:49.867 07:20:49 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:37:49.867 07:20:49 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:49.867 07:20:49 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:49.867 07:20:49 keyring_file -- scripts/common.sh@368 -- # return 0 00:37:49.867 07:20:49 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:49.867 07:20:49 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:49.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:49.867 --rc genhtml_branch_coverage=1 00:37:49.867 --rc genhtml_function_coverage=1 00:37:49.867 --rc genhtml_legend=1 00:37:49.867 --rc geninfo_all_blocks=1 00:37:49.867 --rc geninfo_unexecuted_blocks=1 00:37:49.867 00:37:49.867 ' 00:37:49.867 07:20:49 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:49.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:49.867 --rc genhtml_branch_coverage=1 00:37:49.867 --rc genhtml_function_coverage=1 00:37:49.867 --rc genhtml_legend=1 00:37:49.867 --rc geninfo_all_blocks=1 00:37:49.867 --rc geninfo_unexecuted_blocks=1 00:37:49.867 00:37:49.867 ' 00:37:49.867 07:20:49 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:49.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:49.867 --rc genhtml_branch_coverage=1 00:37:49.867 --rc genhtml_function_coverage=1 00:37:49.867 --rc genhtml_legend=1 00:37:49.867 --rc geninfo_all_blocks=1 00:37:49.867 --rc geninfo_unexecuted_blocks=1 00:37:49.867 00:37:49.867 ' 00:37:49.867 07:20:49 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:49.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:49.867 --rc genhtml_branch_coverage=1 00:37:49.867 --rc genhtml_function_coverage=1 00:37:49.867 --rc genhtml_legend=1 00:37:49.867 --rc geninfo_all_blocks=1 00:37:49.867 --rc geninfo_unexecuted_blocks=1 00:37:49.867 00:37:49.867 ' 00:37:49.867 07:20:49 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:49.867 07:20:49 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:49.867 07:20:49 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:37:49.867 07:20:49 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:49.867 07:20:49 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:49.867 07:20:49 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:49.867 07:20:49 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:49.867 07:20:49 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:49.867 
07:20:49 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:49.867 07:20:49 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:49.867 07:20:49 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:49.867 07:20:49 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:49.867 07:20:49 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:49.867 07:20:49 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:49.867 07:20:49 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:49.867 07:20:49 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:49.867 07:20:49 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:49.867 07:20:49 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:49.867 07:20:49 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:49.867 07:20:49 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:49.867 07:20:49 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:37:49.867 07:20:49 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:49.867 07:20:49 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:49.867 07:20:49 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:49.867 07:20:49 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:49.867 07:20:49 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:49.867 07:20:49 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:49.867 07:20:49 keyring_file -- paths/export.sh@5 -- # export PATH 00:37:49.867 07:20:49 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:49.867 07:20:49 keyring_file -- nvmf/common.sh@51 -- # : 0 
00:37:49.867 07:20:49 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:49.867 07:20:49 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:49.867 07:20:49 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:49.867 07:20:49 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:49.867 07:20:49 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:49.867 07:20:49 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:49.867 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:49.867 07:20:49 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:49.867 07:20:49 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:49.867 07:20:49 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:49.867 07:20:49 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:49.867 07:20:49 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:49.867 07:20:49 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:49.867 07:20:49 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:37:49.867 07:20:49 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:37:49.867 07:20:49 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:37:49.867 07:20:49 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:49.867 07:20:49 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:49.868 07:20:49 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:49.868 07:20:49 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:49.868 07:20:49 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:49.868 07:20:49 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:49.868 07:20:49 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.HuCR3DecKO 00:37:49.868 07:20:49 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:49.868 07:20:49 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:49.868 07:20:49 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:37:49.868 07:20:49 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:37:49.868 07:20:49 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:37:49.868 07:20:49 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:37:49.868 07:20:49 keyring_file -- nvmf/common.sh@731 -- # python - 00:37:49.868 07:20:49 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.HuCR3DecKO 00:37:49.868 07:20:49 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.HuCR3DecKO 00:37:49.868 07:20:49 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.HuCR3DecKO 00:37:49.868 07:20:49 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:37:49.868 07:20:49 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:49.868 07:20:49 keyring_file -- keyring/common.sh@17 -- # name=key1 00:37:49.868 07:20:49 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:49.868 07:20:49 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:49.868 07:20:49 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:49.868 07:20:49 keyring_file -- keyring/common.sh@18 -- 
# path=/tmp/tmp.lXazNYnpno 00:37:49.868 07:20:49 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:49.868 07:20:49 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:49.868 07:20:49 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:37:49.868 07:20:49 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:37:49.868 07:20:49 keyring_file -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:37:49.868 07:20:49 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:37:49.868 07:20:49 keyring_file -- nvmf/common.sh@731 -- # python - 00:37:49.868 07:20:49 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.lXazNYnpno 00:37:49.868 07:20:49 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.lXazNYnpno 00:37:49.868 07:20:49 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.lXazNYnpno 00:37:49.868 07:20:49 keyring_file -- keyring/file.sh@30 -- # tgtpid=3454991 00:37:49.868 07:20:49 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3454991 00:37:49.868 07:20:49 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:49.868 07:20:49 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 3454991 ']' 00:37:49.868 07:20:49 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:49.868 07:20:49 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:49.868 07:20:49 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:49.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:49.868 07:20:49 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:49.868 07:20:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:49.868 [2024-10-16 07:20:49.305743] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
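[editor's note] The prep_key/format_interchange_psk calls traced above turn a raw hex key into the TP 8011 "PSK interchange" text that keyring_file_add_key expects: the NVMeTLSkey-1 prefix, a digest field (0 here, which I read as "no hash transform"), and base64 of the key bytes. A sketch of that packing, assuming a CRC-32 of the key is appended little-endian before encoding and the blob ends with a trailing colon; verify against nvmf/common.sh's format_key before relying on this.

#!/usr/bin/env bash
# Hedged sketch of what prep_key appears to do above; mirrors the log's
# use of an inline 'python -' for the encoding step.
key_hex=00112233445566778899aabbccddeeff
path=$(mktemp)

python3 - "$key_hex" > "$path" <<'EOF'
import base64, struct, sys, zlib
key = bytes.fromhex(sys.argv[1])
# Assumption: TP 8011 interchange format appends CRC-32 (little-endian).
blob = key + struct.pack('<I', zlib.crc32(key))
print(f"NVMeTLSkey-1:00:{base64.b64encode(blob).decode()}:")
EOF

chmod 0600 "$path"   # the 0660 negative test below shows group/other bits are rejected
echo "$path"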
00:37:49.868 [2024-10-16 07:20:49.305819] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3454991 ] 00:37:50.129 [2024-10-16 07:20:49.386036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:50.129 [2024-10-16 07:20:49.439771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:50.700 07:20:50 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:50.700 07:20:50 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:37:50.700 07:20:50 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:37:50.700 07:20:50 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:50.700 07:20:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:50.700 [2024-10-16 07:20:50.125763] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:50.700 null0 00:37:50.700 [2024-10-16 07:20:50.157804] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:50.700 [2024-10-16 07:20:50.158421] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:50.700 07:20:50 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:50.700 07:20:50 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:50.700 07:20:50 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:37:50.700 07:20:50 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:50.700 07:20:50 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:37:50.700 07:20:50 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:50.700 07:20:50 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:37:50.700 07:20:50 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:50.700 07:20:50 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:50.700 07:20:50 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:50.700 07:20:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:50.700 [2024-10-16 07:20:50.189869] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:37:50.700 request: 00:37:50.700 { 00:37:50.700 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:37:50.700 "secure_channel": false, 00:37:50.700 "listen_address": { 00:37:50.700 "trtype": "tcp", 00:37:50.700 "traddr": "127.0.0.1", 00:37:50.700 "trsvcid": "4420" 00:37:50.700 }, 00:37:50.700 "method": "nvmf_subsystem_add_listener", 00:37:50.700 "req_id": 1 00:37:50.700 } 00:37:50.700 Got JSON-RPC error response 00:37:50.700 response: 00:37:50.700 { 00:37:50.700 "code": -32602, 00:37:50.700 "message": "Invalid parameters" 00:37:50.700 } 00:37:50.700 07:20:50 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:37:50.700 07:20:50 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:37:50.700 07:20:50 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:50.700 07:20:50 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:50.700 07:20:50 keyring_file -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:50.961 07:20:50 keyring_file -- keyring/file.sh@47 -- # bperfpid=3455030 00:37:50.961 07:20:50 keyring_file -- keyring/file.sh@49 -- # waitforlisten 3455030 /var/tmp/bperf.sock 00:37:50.961 07:20:50 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:37:50.961 07:20:50 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 3455030 ']' 00:37:50.961 07:20:50 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:50.961 07:20:50 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:50.961 07:20:50 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:50.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:50.961 07:20:50 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:50.961 07:20:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:50.961 [2024-10-16 07:20:50.250209] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:37:50.961 [2024-10-16 07:20:50.250276] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3455030 ] 00:37:50.961 [2024-10-16 07:20:50.330773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:50.961 [2024-10-16 07:20:50.383966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:51.922 07:20:51 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:51.922 07:20:51 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:37:51.922 07:20:51 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.HuCR3DecKO 00:37:51.922 07:20:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.HuCR3DecKO 00:37:51.922 07:20:51 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.lXazNYnpno 00:37:51.922 07:20:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.lXazNYnpno 00:37:52.210 07:20:51 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:37:52.210 07:20:51 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:37:52.211 07:20:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:52.211 07:20:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:52.211 07:20:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:52.211 07:20:51 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.HuCR3DecKO == \/\t\m\p\/\t\m\p\.\H\u\C\R\3\D\e\c\K\O ]] 00:37:52.211 07:20:51 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:37:52.211 07:20:51 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:37:52.211 07:20:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:52.211 07:20:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:37:52.211 07:20:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:52.520 07:20:51 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.lXazNYnpno == \/\t\m\p\/\t\m\p\.\l\X\a\z\N\Y\n\p\n\o ]] 00:37:52.520 07:20:51 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:37:52.520 07:20:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:52.520 07:20:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:52.520 07:20:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:52.520 07:20:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:52.520 07:20:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:52.781 07:20:52 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:37:52.781 07:20:52 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:37:52.781 07:20:52 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:52.781 07:20:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:52.781 07:20:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:52.781 07:20:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:52.781 07:20:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:52.781 07:20:52 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:37:52.781 07:20:52 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:52.782 07:20:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:53.043 [2024-10-16 07:20:52.442728] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:53.043 nvme0n1 00:37:53.303 07:20:52 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:37:53.303 07:20:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:53.303 07:20:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:53.303 07:20:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:53.303 07:20:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:53.303 07:20:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:53.303 07:20:52 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:37:53.303 07:20:52 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:37:53.303 07:20:52 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:53.303 07:20:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:53.303 07:20:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:53.303 07:20:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:53.303 07:20:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock keyring_get_keys 00:37:53.564 07:20:52 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:37:53.564 07:20:52 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:53.564 Running I/O for 1 seconds... 00:37:54.947 18683.00 IOPS, 72.98 MiB/s 00:37:54.947 Latency(us) 00:37:54.947 [2024-10-16T05:20:54.446Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:54.947 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:37:54.947 nvme0n1 : 1.00 18740.42 73.20 0.00 0.00 6817.73 3741.01 20971.52 00:37:54.947 [2024-10-16T05:20:54.446Z] =================================================================================================================== 00:37:54.947 [2024-10-16T05:20:54.446Z] Total : 18740.42 73.20 0.00 0.00 6817.73 3741.01 20971.52 00:37:54.947 { 00:37:54.947 "results": [ 00:37:54.947 { 00:37:54.947 "job": "nvme0n1", 00:37:54.947 "core_mask": "0x2", 00:37:54.947 "workload": "randrw", 00:37:54.947 "percentage": 50, 00:37:54.947 "status": "finished", 00:37:54.947 "queue_depth": 128, 00:37:54.947 "io_size": 4096, 00:37:54.947 "runtime": 1.003766, 00:37:54.947 "iops": 18740.42356485476, 00:37:54.947 "mibps": 73.2047795502139, 00:37:54.947 "io_failed": 0, 00:37:54.947 "io_timeout": 0, 00:37:54.947 "avg_latency_us": 6817.728237024436, 00:37:54.947 "min_latency_us": 3741.0133333333333, 00:37:54.947 "max_latency_us": 20971.52 00:37:54.947 } 00:37:54.947 ], 00:37:54.947 "core_count": 1 00:37:54.947 } 00:37:54.947 07:20:54 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:54.947 07:20:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:54.947 07:20:54 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:37:54.947 07:20:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:54.947 07:20:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:54.947 07:20:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:54.947 07:20:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:54.947 07:20:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:54.947 07:20:54 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:37:54.947 07:20:54 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:37:54.947 07:20:54 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:54.947 07:20:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:54.947 07:20:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:54.947 07:20:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:54.947 07:20:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:55.207 07:20:54 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:37:55.207 07:20:54 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:55.207 07:20:54 keyring_file -- common/autotest_common.sh@650 -- # local es=0 
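[editor's note] The positive-path round-trips traced above (keyring_file_add_key for both keys, bdev_nvme_attach_controller with --psk key0, a one-second randrw run, then refcount checks) all go over the bperf RPC socket. A sketch of that sequence against an already-running bdevperf, using the rpc.py path, socket, and key files from this log; note that --psk names a registered key, not a file path, and that an attached controller holds an extra reference on its key (refcnt 2 vs 1).

#!/usr/bin/env bash
# Hedged sketch of the bperf RPC sequence exercised above.
rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

$rpc keyring_file_add_key key0 /tmp/tmp.HuCR3DecKO
$rpc keyring_file_add_key key1 /tmp/tmp.lXazNYnpno

# Attach over TLS; the PSK argument is the keyring name, not the file.
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0

# key0's refcnt should now read 2: one keyring reference, one controller reference.
$rpc keyring_get_keys | jq '.[] | select(.name == "key0") | .refcnt'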
00:37:55.207 07:20:54 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:55.207 07:20:54 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:37:55.207 07:20:54 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:55.207 07:20:54 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:37:55.207 07:20:54 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:55.207 07:20:54 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:55.207 07:20:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:55.468 [2024-10-16 07:20:54.781620] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:55.468 [2024-10-16 07:20:54.782433] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf63cc0 (107): Transport endpoint is not connected 00:37:55.468 [2024-10-16 07:20:54.783428] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf63cc0 (9): Bad file descriptor 00:37:55.468 [2024-10-16 07:20:54.784431] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:55.468 [2024-10-16 07:20:54.784439] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:55.468 [2024-10-16 07:20:54.784445] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:37:55.468 [2024-10-16 07:20:54.784452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:37:55.468 request: 00:37:55.468 { 00:37:55.468 "name": "nvme0", 00:37:55.468 "trtype": "tcp", 00:37:55.468 "traddr": "127.0.0.1", 00:37:55.468 "adrfam": "ipv4", 00:37:55.468 "trsvcid": "4420", 00:37:55.468 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:55.468 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:55.468 "prchk_reftag": false, 00:37:55.468 "prchk_guard": false, 00:37:55.468 "hdgst": false, 00:37:55.468 "ddgst": false, 00:37:55.468 "psk": "key1", 00:37:55.468 "allow_unrecognized_csi": false, 00:37:55.468 "method": "bdev_nvme_attach_controller", 00:37:55.468 "req_id": 1 00:37:55.468 } 00:37:55.468 Got JSON-RPC error response 00:37:55.468 response: 00:37:55.468 { 00:37:55.468 "code": -5, 00:37:55.468 "message": "Input/output error" 00:37:55.468 } 00:37:55.468 07:20:54 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:37:55.468 07:20:54 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:55.468 07:20:54 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:55.468 07:20:54 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:55.468 07:20:54 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:37:55.468 07:20:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:55.468 07:20:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:55.468 07:20:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:55.468 07:20:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:55.468 07:20:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:55.729 07:20:54 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:37:55.729 07:20:54 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:37:55.729 07:20:54 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:55.729 07:20:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:55.729 07:20:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:55.729 07:20:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:55.729 07:20:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:55.729 07:20:55 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:37:55.729 07:20:55 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:37:55.729 07:20:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:55.991 07:20:55 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:37:55.991 07:20:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:37:56.252 07:20:55 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:37:56.252 07:20:55 keyring_file -- keyring/file.sh@78 -- # jq length 00:37:56.252 07:20:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:56.252 07:20:55 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:37:56.252 07:20:55 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.HuCR3DecKO 00:37:56.252 07:20:55 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.HuCR3DecKO 00:37:56.252 07:20:55 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:37:56.252 07:20:55 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.HuCR3DecKO 00:37:56.252 07:20:55 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:37:56.252 07:20:55 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:56.252 07:20:55 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:37:56.252 07:20:55 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:56.252 07:20:55 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.HuCR3DecKO 00:37:56.252 07:20:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.HuCR3DecKO 00:37:56.513 [2024-10-16 07:20:55.860311] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.HuCR3DecKO': 0100660 00:37:56.513 [2024-10-16 07:20:55.860331] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:37:56.513 request: 00:37:56.513 { 00:37:56.513 "name": "key0", 00:37:56.513 "path": "/tmp/tmp.HuCR3DecKO", 00:37:56.513 "method": "keyring_file_add_key", 00:37:56.513 "req_id": 1 00:37:56.513 } 00:37:56.513 Got JSON-RPC error response 00:37:56.513 response: 00:37:56.513 { 00:37:56.513 "code": -1, 00:37:56.513 "message": "Operation not permitted" 00:37:56.513 } 00:37:56.513 07:20:55 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:37:56.513 07:20:55 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:56.513 07:20:55 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:56.513 07:20:55 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:56.513 07:20:55 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.HuCR3DecKO 00:37:56.513 07:20:55 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.HuCR3DecKO 00:37:56.513 07:20:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.HuCR3DecKO 00:37:56.774 07:20:56 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.HuCR3DecKO 00:37:56.774 07:20:56 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:37:56.774 07:20:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:56.774 07:20:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:56.774 07:20:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:56.774 07:20:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:56.774 07:20:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:56.774 07:20:56 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:37:56.774 07:20:56 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:56.774 07:20:56 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:37:56.774 07:20:56 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:56.774 07:20:56 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:37:56.774 07:20:56 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:56.774 07:20:56 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:37:56.774 07:20:56 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:56.774 07:20:56 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:56.774 07:20:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:57.033 [2024-10-16 07:20:56.373617] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.HuCR3DecKO': No such file or directory 00:37:57.033 [2024-10-16 07:20:56.373631] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:37:57.033 [2024-10-16 07:20:56.373644] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:37:57.033 [2024-10-16 07:20:56.373650] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:37:57.033 [2024-10-16 07:20:56.373656] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:57.033 [2024-10-16 07:20:56.373662] bdev_nvme.c:6438:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:37:57.033 request: 00:37:57.033 { 00:37:57.033 "name": "nvme0", 00:37:57.033 "trtype": "tcp", 00:37:57.033 "traddr": "127.0.0.1", 00:37:57.033 "adrfam": "ipv4", 00:37:57.033 "trsvcid": "4420", 00:37:57.033 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:57.033 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:57.033 "prchk_reftag": false, 00:37:57.033 "prchk_guard": false, 00:37:57.033 "hdgst": false, 00:37:57.033 "ddgst": false, 00:37:57.033 "psk": "key0", 00:37:57.033 "allow_unrecognized_csi": false, 00:37:57.033 "method": "bdev_nvme_attach_controller", 00:37:57.033 "req_id": 1 00:37:57.033 } 00:37:57.033 Got JSON-RPC error response 00:37:57.033 response: 00:37:57.033 { 00:37:57.033 "code": -19, 00:37:57.033 "message": "No such device" 00:37:57.033 } 00:37:57.033 07:20:56 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:37:57.033 07:20:56 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:57.033 07:20:56 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:57.033 07:20:56 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:57.033 07:20:56 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:37:57.033 07:20:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:57.293 07:20:56 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:57.293 07:20:56 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:37:57.293 07:20:56 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:57.293 07:20:56 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:57.293 07:20:56 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:57.293 07:20:56 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:57.293 07:20:56 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.S5IFQkE97M 00:37:57.293 07:20:56 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:57.293 07:20:56 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:57.293 07:20:56 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:37:57.293 07:20:56 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:37:57.293 07:20:56 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:37:57.293 07:20:56 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:37:57.293 07:20:56 keyring_file -- nvmf/common.sh@731 -- # python - 00:37:57.293 07:20:56 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.S5IFQkE97M 00:37:57.293 07:20:56 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.S5IFQkE97M 00:37:57.293 07:20:56 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.S5IFQkE97M 00:37:57.293 07:20:56 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.S5IFQkE97M 00:37:57.293 07:20:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.S5IFQkE97M 00:37:57.293 07:20:56 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:57.293 07:20:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:57.553 nvme0n1 00:37:57.553 07:20:57 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:37:57.553 07:20:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:57.553 07:20:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:57.553 07:20:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:57.553 07:20:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:57.553 07:20:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:57.814 07:20:57 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:37:57.814 07:20:57 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:37:57.814 07:20:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:58.075 07:20:57 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:37:58.075 07:20:57 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:37:58.075 07:20:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:58.075 07:20:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:58.075 07:20:57 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:58.075 07:20:57 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:37:58.075 07:20:57 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:37:58.075 07:20:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:58.075 07:20:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:58.075 07:20:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:58.075 07:20:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:58.075 07:20:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:58.336 07:20:57 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:37:58.336 07:20:57 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:58.336 07:20:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:58.597 07:20:57 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:37:58.597 07:20:57 keyring_file -- keyring/file.sh@105 -- # jq length 00:37:58.597 07:20:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:58.597 07:20:58 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:37:58.597 07:20:58 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.S5IFQkE97M 00:37:58.597 07:20:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.S5IFQkE97M 00:37:58.857 07:20:58 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.lXazNYnpno 00:37:58.857 07:20:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.lXazNYnpno 00:37:59.118 07:20:58 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:59.118 07:20:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:59.118 nvme0n1 00:37:59.379 07:20:58 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:37:59.379 07:20:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:37:59.379 07:20:58 keyring_file -- keyring/file.sh@113 -- # config='{ 00:37:59.379 "subsystems": [ 00:37:59.379 { 00:37:59.379 "subsystem": "keyring", 00:37:59.379 "config": [ 00:37:59.379 { 00:37:59.379 "method": "keyring_file_add_key", 00:37:59.379 "params": { 00:37:59.379 "name": "key0", 00:37:59.379 "path": "/tmp/tmp.S5IFQkE97M" 00:37:59.379 } 00:37:59.379 }, 00:37:59.379 { 00:37:59.379 "method": "keyring_file_add_key", 00:37:59.379 "params": { 00:37:59.379 "name": "key1", 00:37:59.379 "path": "/tmp/tmp.lXazNYnpno" 00:37:59.379 } 00:37:59.379 } 00:37:59.379 ] 00:37:59.379 
}, 00:37:59.379 { 00:37:59.379 "subsystem": "iobuf", 00:37:59.379 "config": [ 00:37:59.379 { 00:37:59.379 "method": "iobuf_set_options", 00:37:59.379 "params": { 00:37:59.379 "small_pool_count": 8192, 00:37:59.379 "large_pool_count": 1024, 00:37:59.379 "small_bufsize": 8192, 00:37:59.379 "large_bufsize": 135168 00:37:59.379 } 00:37:59.379 } 00:37:59.379 ] 00:37:59.379 }, 00:37:59.379 { 00:37:59.379 "subsystem": "sock", 00:37:59.379 "config": [ 00:37:59.379 { 00:37:59.379 "method": "sock_set_default_impl", 00:37:59.379 "params": { 00:37:59.379 "impl_name": "posix" 00:37:59.379 } 00:37:59.379 }, 00:37:59.379 { 00:37:59.379 "method": "sock_impl_set_options", 00:37:59.379 "params": { 00:37:59.379 "impl_name": "ssl", 00:37:59.379 "recv_buf_size": 4096, 00:37:59.379 "send_buf_size": 4096, 00:37:59.379 "enable_recv_pipe": true, 00:37:59.379 "enable_quickack": false, 00:37:59.379 "enable_placement_id": 0, 00:37:59.380 "enable_zerocopy_send_server": true, 00:37:59.380 "enable_zerocopy_send_client": false, 00:37:59.380 "zerocopy_threshold": 0, 00:37:59.380 "tls_version": 0, 00:37:59.380 "enable_ktls": false 00:37:59.380 } 00:37:59.380 }, 00:37:59.380 { 00:37:59.380 "method": "sock_impl_set_options", 00:37:59.380 "params": { 00:37:59.380 "impl_name": "posix", 00:37:59.380 "recv_buf_size": 2097152, 00:37:59.380 "send_buf_size": 2097152, 00:37:59.380 "enable_recv_pipe": true, 00:37:59.380 "enable_quickack": false, 00:37:59.380 "enable_placement_id": 0, 00:37:59.380 "enable_zerocopy_send_server": true, 00:37:59.380 "enable_zerocopy_send_client": false, 00:37:59.380 "zerocopy_threshold": 0, 00:37:59.380 "tls_version": 0, 00:37:59.380 "enable_ktls": false 00:37:59.380 } 00:37:59.380 } 00:37:59.380 ] 00:37:59.380 }, 00:37:59.380 { 00:37:59.380 "subsystem": "vmd", 00:37:59.380 "config": [] 00:37:59.380 }, 00:37:59.380 { 00:37:59.380 "subsystem": "accel", 00:37:59.380 "config": [ 00:37:59.380 { 00:37:59.380 "method": "accel_set_options", 00:37:59.380 "params": { 00:37:59.380 "small_cache_size": 128, 00:37:59.380 "large_cache_size": 16, 00:37:59.380 "task_count": 2048, 00:37:59.380 "sequence_count": 2048, 00:37:59.380 "buf_count": 2048 00:37:59.380 } 00:37:59.380 } 00:37:59.380 ] 00:37:59.380 }, 00:37:59.380 { 00:37:59.380 "subsystem": "bdev", 00:37:59.380 "config": [ 00:37:59.380 { 00:37:59.380 "method": "bdev_set_options", 00:37:59.380 "params": { 00:37:59.380 "bdev_io_pool_size": 65535, 00:37:59.380 "bdev_io_cache_size": 256, 00:37:59.380 "bdev_auto_examine": true, 00:37:59.380 "iobuf_small_cache_size": 128, 00:37:59.380 "iobuf_large_cache_size": 16 00:37:59.380 } 00:37:59.380 }, 00:37:59.380 { 00:37:59.380 "method": "bdev_raid_set_options", 00:37:59.380 "params": { 00:37:59.380 "process_window_size_kb": 1024, 00:37:59.380 "process_max_bandwidth_mb_sec": 0 00:37:59.380 } 00:37:59.380 }, 00:37:59.380 { 00:37:59.380 "method": "bdev_iscsi_set_options", 00:37:59.380 "params": { 00:37:59.380 "timeout_sec": 30 00:37:59.380 } 00:37:59.380 }, 00:37:59.380 { 00:37:59.380 "method": "bdev_nvme_set_options", 00:37:59.380 "params": { 00:37:59.380 "action_on_timeout": "none", 00:37:59.380 "timeout_us": 0, 00:37:59.380 "timeout_admin_us": 0, 00:37:59.380 "keep_alive_timeout_ms": 10000, 00:37:59.380 "arbitration_burst": 0, 00:37:59.380 "low_priority_weight": 0, 00:37:59.380 "medium_priority_weight": 0, 00:37:59.380 "high_priority_weight": 0, 00:37:59.380 "nvme_adminq_poll_period_us": 10000, 00:37:59.380 "nvme_ioq_poll_period_us": 0, 00:37:59.380 "io_queue_requests": 512, 00:37:59.380 "delay_cmd_submit": true, 00:37:59.380 
"transport_retry_count": 4, 00:37:59.380 "bdev_retry_count": 3, 00:37:59.380 "transport_ack_timeout": 0, 00:37:59.380 "ctrlr_loss_timeout_sec": 0, 00:37:59.380 "reconnect_delay_sec": 0, 00:37:59.380 "fast_io_fail_timeout_sec": 0, 00:37:59.380 "disable_auto_failback": false, 00:37:59.380 "generate_uuids": false, 00:37:59.380 "transport_tos": 0, 00:37:59.380 "nvme_error_stat": false, 00:37:59.380 "rdma_srq_size": 0, 00:37:59.380 "io_path_stat": false, 00:37:59.380 "allow_accel_sequence": false, 00:37:59.380 "rdma_max_cq_size": 0, 00:37:59.380 "rdma_cm_event_timeout_ms": 0, 00:37:59.380 "dhchap_digests": [ 00:37:59.380 "sha256", 00:37:59.380 "sha384", 00:37:59.380 "sha512" 00:37:59.380 ], 00:37:59.380 "dhchap_dhgroups": [ 00:37:59.380 "null", 00:37:59.380 "ffdhe2048", 00:37:59.380 "ffdhe3072", 00:37:59.380 "ffdhe4096", 00:37:59.380 "ffdhe6144", 00:37:59.380 "ffdhe8192" 00:37:59.380 ] 00:37:59.380 } 00:37:59.380 }, 00:37:59.380 { 00:37:59.380 "method": "bdev_nvme_attach_controller", 00:37:59.380 "params": { 00:37:59.380 "name": "nvme0", 00:37:59.380 "trtype": "TCP", 00:37:59.380 "adrfam": "IPv4", 00:37:59.380 "traddr": "127.0.0.1", 00:37:59.380 "trsvcid": "4420", 00:37:59.380 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:59.380 "prchk_reftag": false, 00:37:59.380 "prchk_guard": false, 00:37:59.380 "ctrlr_loss_timeout_sec": 0, 00:37:59.380 "reconnect_delay_sec": 0, 00:37:59.380 "fast_io_fail_timeout_sec": 0, 00:37:59.380 "psk": "key0", 00:37:59.380 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:59.380 "hdgst": false, 00:37:59.380 "ddgst": false, 00:37:59.380 "multipath": "multipath" 00:37:59.380 } 00:37:59.380 }, 00:37:59.380 { 00:37:59.380 "method": "bdev_nvme_set_hotplug", 00:37:59.380 "params": { 00:37:59.380 "period_us": 100000, 00:37:59.380 "enable": false 00:37:59.380 } 00:37:59.380 }, 00:37:59.380 { 00:37:59.380 "method": "bdev_wait_for_examine" 00:37:59.380 } 00:37:59.380 ] 00:37:59.380 }, 00:37:59.380 { 00:37:59.380 "subsystem": "nbd", 00:37:59.380 "config": [] 00:37:59.380 } 00:37:59.380 ] 00:37:59.380 }' 00:37:59.380 07:20:58 keyring_file -- keyring/file.sh@115 -- # killprocess 3455030 00:37:59.380 07:20:58 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 3455030 ']' 00:37:59.380 07:20:58 keyring_file -- common/autotest_common.sh@954 -- # kill -0 3455030 00:37:59.380 07:20:58 keyring_file -- common/autotest_common.sh@955 -- # uname 00:37:59.641 07:20:58 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:59.641 07:20:58 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3455030 00:37:59.641 07:20:58 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:59.641 07:20:58 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:59.641 07:20:58 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3455030' 00:37:59.641 killing process with pid 3455030 00:37:59.641 07:20:58 keyring_file -- common/autotest_common.sh@969 -- # kill 3455030 00:37:59.641 Received shutdown signal, test time was about 1.000000 seconds 00:37:59.641 00:37:59.641 Latency(us) 00:37:59.641 [2024-10-16T05:20:59.140Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:59.641 [2024-10-16T05:20:59.140Z] =================================================================================================================== 00:37:59.641 [2024-10-16T05:20:59.140Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:59.641 07:20:58 keyring_file -- 
common/autotest_common.sh@974 -- # wait 3455030 00:37:59.641 07:20:59 keyring_file -- keyring/file.sh@118 -- # bperfpid=3456842 00:37:59.641 07:20:59 keyring_file -- keyring/file.sh@120 -- # waitforlisten 3456842 /var/tmp/bperf.sock 00:37:59.641 07:20:59 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 3456842 ']' 00:37:59.641 07:20:59 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:59.641 07:20:59 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:59.641 07:20:59 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:37:59.641 07:20:59 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:59.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:59.641 07:20:59 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:59.641 07:20:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:59.641 07:20:59 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:37:59.641 "subsystems": [ 00:37:59.641 { 00:37:59.641 "subsystem": "keyring", 00:37:59.641 "config": [ 00:37:59.641 { 00:37:59.641 "method": "keyring_file_add_key", 00:37:59.641 "params": { 00:37:59.641 "name": "key0", 00:37:59.641 "path": "/tmp/tmp.S5IFQkE97M" 00:37:59.641 } 00:37:59.641 }, 00:37:59.641 { 00:37:59.641 "method": "keyring_file_add_key", 00:37:59.641 "params": { 00:37:59.641 "name": "key1", 00:37:59.641 "path": "/tmp/tmp.lXazNYnpno" 00:37:59.641 } 00:37:59.641 } 00:37:59.641 ] 00:37:59.641 }, 00:37:59.641 { 00:37:59.641 "subsystem": "iobuf", 00:37:59.641 "config": [ 00:37:59.641 { 00:37:59.641 "method": "iobuf_set_options", 00:37:59.641 "params": { 00:37:59.641 "small_pool_count": 8192, 00:37:59.641 "large_pool_count": 1024, 00:37:59.641 "small_bufsize": 8192, 00:37:59.641 "large_bufsize": 135168 00:37:59.641 } 00:37:59.641 } 00:37:59.641 ] 00:37:59.641 }, 00:37:59.641 { 00:37:59.641 "subsystem": "sock", 00:37:59.641 "config": [ 00:37:59.641 { 00:37:59.641 "method": "sock_set_default_impl", 00:37:59.641 "params": { 00:37:59.641 "impl_name": "posix" 00:37:59.641 } 00:37:59.641 }, 00:37:59.641 { 00:37:59.641 "method": "sock_impl_set_options", 00:37:59.641 "params": { 00:37:59.641 "impl_name": "ssl", 00:37:59.641 "recv_buf_size": 4096, 00:37:59.641 "send_buf_size": 4096, 00:37:59.641 "enable_recv_pipe": true, 00:37:59.641 "enable_quickack": false, 00:37:59.641 "enable_placement_id": 0, 00:37:59.641 "enable_zerocopy_send_server": true, 00:37:59.641 "enable_zerocopy_send_client": false, 00:37:59.641 "zerocopy_threshold": 0, 00:37:59.641 "tls_version": 0, 00:37:59.641 "enable_ktls": false 00:37:59.641 } 00:37:59.641 }, 00:37:59.641 { 00:37:59.641 "method": "sock_impl_set_options", 00:37:59.641 "params": { 00:37:59.641 "impl_name": "posix", 00:37:59.641 "recv_buf_size": 2097152, 00:37:59.641 "send_buf_size": 2097152, 00:37:59.641 "enable_recv_pipe": true, 00:37:59.641 "enable_quickack": false, 00:37:59.641 "enable_placement_id": 0, 00:37:59.641 "enable_zerocopy_send_server": true, 00:37:59.641 "enable_zerocopy_send_client": false, 00:37:59.641 "zerocopy_threshold": 0, 00:37:59.641 "tls_version": 0, 00:37:59.641 "enable_ktls": false 00:37:59.641 } 00:37:59.641 } 00:37:59.641 ] 00:37:59.641 }, 00:37:59.641 { 00:37:59.641 "subsystem": "vmd", 00:37:59.641 
"config": [] 00:37:59.641 }, 00:37:59.641 { 00:37:59.641 "subsystem": "accel", 00:37:59.641 "config": [ 00:37:59.641 { 00:37:59.641 "method": "accel_set_options", 00:37:59.641 "params": { 00:37:59.641 "small_cache_size": 128, 00:37:59.641 "large_cache_size": 16, 00:37:59.641 "task_count": 2048, 00:37:59.641 "sequence_count": 2048, 00:37:59.641 "buf_count": 2048 00:37:59.641 } 00:37:59.641 } 00:37:59.642 ] 00:37:59.642 }, 00:37:59.642 { 00:37:59.642 "subsystem": "bdev", 00:37:59.642 "config": [ 00:37:59.642 { 00:37:59.642 "method": "bdev_set_options", 00:37:59.642 "params": { 00:37:59.642 "bdev_io_pool_size": 65535, 00:37:59.642 "bdev_io_cache_size": 256, 00:37:59.642 "bdev_auto_examine": true, 00:37:59.642 "iobuf_small_cache_size": 128, 00:37:59.642 "iobuf_large_cache_size": 16 00:37:59.642 } 00:37:59.642 }, 00:37:59.642 { 00:37:59.642 "method": "bdev_raid_set_options", 00:37:59.642 "params": { 00:37:59.642 "process_window_size_kb": 1024, 00:37:59.642 "process_max_bandwidth_mb_sec": 0 00:37:59.642 } 00:37:59.642 }, 00:37:59.642 { 00:37:59.642 "method": "bdev_iscsi_set_options", 00:37:59.642 "params": { 00:37:59.642 "timeout_sec": 30 00:37:59.642 } 00:37:59.642 }, 00:37:59.642 { 00:37:59.642 "method": "bdev_nvme_set_options", 00:37:59.642 "params": { 00:37:59.642 "action_on_timeout": "none", 00:37:59.642 "timeout_us": 0, 00:37:59.642 "timeout_admin_us": 0, 00:37:59.642 "keep_alive_timeout_ms": 10000, 00:37:59.642 "arbitration_burst": 0, 00:37:59.642 "low_priority_weight": 0, 00:37:59.642 "medium_priority_weight": 0, 00:37:59.642 "high_priority_weight": 0, 00:37:59.642 "nvme_adminq_poll_period_us": 10000, 00:37:59.642 "nvme_ioq_poll_period_us": 0, 00:37:59.642 "io_queue_requests": 512, 00:37:59.642 "delay_cmd_submit": true, 00:37:59.642 "transport_retry_count": 4, 00:37:59.642 "bdev_retry_count": 3, 00:37:59.642 "transport_ack_timeout": 0, 00:37:59.642 "ctrlr_loss_timeout_sec": 0, 00:37:59.642 "reconnect_delay_sec": 0, 00:37:59.642 "fast_io_fail_timeout_sec": 0, 00:37:59.642 "disable_auto_failback": false, 00:37:59.642 "generate_uuids": false, 00:37:59.642 "transport_tos": 0, 00:37:59.642 "nvme_error_stat": false, 00:37:59.642 "rdma_srq_size": 0, 00:37:59.642 "io_path_stat": false, 00:37:59.642 "allow_accel_sequence": false, 00:37:59.642 "rdma_max_cq_size": 0, 00:37:59.642 "rdma_cm_event_timeout_ms": 0, 00:37:59.642 "dhchap_digests": [ 00:37:59.642 "sha256", 00:37:59.642 "sha384", 00:37:59.642 "sha512" 00:37:59.642 ], 00:37:59.642 "dhchap_dhgroups": [ 00:37:59.642 "null", 00:37:59.642 "ffdhe2048", 00:37:59.642 "ffdhe3072", 00:37:59.642 "ffdhe4096", 00:37:59.642 "ffdhe6144", 00:37:59.642 "ffdhe8192" 00:37:59.642 ] 00:37:59.642 } 00:37:59.642 }, 00:37:59.642 { 00:37:59.642 "method": "bdev_nvme_attach_controller", 00:37:59.642 "params": { 00:37:59.642 "name": "nvme0", 00:37:59.642 "trtype": "TCP", 00:37:59.642 "adrfam": "IPv4", 00:37:59.642 "traddr": "127.0.0.1", 00:37:59.642 "trsvcid": "4420", 00:37:59.642 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:59.642 "prchk_reftag": false, 00:37:59.642 "prchk_guard": false, 00:37:59.642 "ctrlr_loss_timeout_sec": 0, 00:37:59.642 "reconnect_delay_sec": 0, 00:37:59.642 "fast_io_fail_timeout_sec": 0, 00:37:59.642 "psk": "key0", 00:37:59.642 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:59.642 "hdgst": false, 00:37:59.642 "ddgst": false, 00:37:59.642 "multipath": "multipath" 00:37:59.642 } 00:37:59.642 }, 00:37:59.642 { 00:37:59.642 "method": "bdev_nvme_set_hotplug", 00:37:59.642 "params": { 00:37:59.642 "period_us": 100000, 00:37:59.642 "enable": false 
00:37:59.642 } 00:37:59.642 }, 00:37:59.642 { 00:37:59.642 "method": "bdev_wait_for_examine" 00:37:59.642 } 00:37:59.642 ] 00:37:59.642 }, 00:37:59.642 { 00:37:59.642 "subsystem": "nbd", 00:37:59.642 "config": [] 00:37:59.642 } 00:37:59.642 ] 00:37:59.642 }' 00:37:59.642 [2024-10-16 07:20:59.091630] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 00:37:59.642 [2024-10-16 07:20:59.091687] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3456842 ] 00:37:59.903 [2024-10-16 07:20:59.166108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:59.903 [2024-10-16 07:20:59.195414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:59.903 [2024-10-16 07:20:59.338113] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:00.475 07:20:59 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:00.475 07:20:59 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:38:00.475 07:20:59 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:38:00.475 07:20:59 keyring_file -- keyring/file.sh@121 -- # jq length 00:38:00.475 07:20:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:00.736 07:21:00 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:38:00.736 07:21:00 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:38:00.736 07:21:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:00.736 07:21:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:00.736 07:21:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:00.736 07:21:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:00.736 07:21:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:00.736 07:21:00 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:38:00.736 07:21:00 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:38:00.736 07:21:00 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:00.736 07:21:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:00.736 07:21:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:00.736 07:21:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:00.736 07:21:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:00.995 07:21:00 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:38:00.995 07:21:00 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:38:00.995 07:21:00 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:38:00.995 07:21:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:38:01.255 07:21:00 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:38:01.255 07:21:00 keyring_file -- keyring/file.sh@1 -- # cleanup 00:38:01.255 07:21:00 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.S5IFQkE97M 
/tmp/tmp.lXazNYnpno 00:38:01.255 07:21:00 keyring_file -- keyring/file.sh@20 -- # killprocess 3456842 00:38:01.255 07:21:00 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 3456842 ']' 00:38:01.255 07:21:00 keyring_file -- common/autotest_common.sh@954 -- # kill -0 3456842 00:38:01.255 07:21:00 keyring_file -- common/autotest_common.sh@955 -- # uname 00:38:01.255 07:21:00 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:01.255 07:21:00 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3456842 00:38:01.255 07:21:00 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:01.255 07:21:00 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:38:01.255 07:21:00 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3456842' 00:38:01.255 killing process with pid 3456842 00:38:01.255 07:21:00 keyring_file -- common/autotest_common.sh@969 -- # kill 3456842 00:38:01.255 Received shutdown signal, test time was about 1.000000 seconds 00:38:01.255 00:38:01.255 Latency(us) 00:38:01.255 [2024-10-16T05:21:00.754Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:01.255 [2024-10-16T05:21:00.754Z] =================================================================================================================== 00:38:01.255 [2024-10-16T05:21:00.754Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:38:01.255 07:21:00 keyring_file -- common/autotest_common.sh@974 -- # wait 3456842 00:38:01.516 07:21:00 keyring_file -- keyring/file.sh@21 -- # killprocess 3454991 00:38:01.516 07:21:00 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 3454991 ']' 00:38:01.516 07:21:00 keyring_file -- common/autotest_common.sh@954 -- # kill -0 3454991 00:38:01.516 07:21:00 keyring_file -- common/autotest_common.sh@955 -- # uname 00:38:01.516 07:21:00 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:01.516 07:21:00 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3454991 00:38:01.516 07:21:00 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:01.516 07:21:00 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:01.516 07:21:00 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3454991' 00:38:01.516 killing process with pid 3454991 00:38:01.516 07:21:00 keyring_file -- common/autotest_common.sh@969 -- # kill 3454991 00:38:01.516 07:21:00 keyring_file -- common/autotest_common.sh@974 -- # wait 3454991 00:38:01.777 00:38:01.777 real 0m12.135s 00:38:01.777 user 0m29.282s 00:38:01.777 sys 0m2.708s 00:38:01.777 07:21:01 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:01.777 07:21:01 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:01.777 ************************************ 00:38:01.777 END TEST keyring_file 00:38:01.777 ************************************ 00:38:01.777 07:21:01 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:38:01.777 07:21:01 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:38:01.777 07:21:01 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:38:01.777 07:21:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:01.777 07:21:01 -- common/autotest_common.sh@10 -- # set +x 
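The keyring_file suite that just finished drives both the SPDK target and bdevperf entirely through scripts/rpc.py against UNIX domain sockets (/var/tmp/spdk.sock and /var/tmp/bperf.sock in this trace). As a reading aid, the sketch below shows roughly what each bperf_cmd boils down to on the wire -- one JSON-RPC 2.0 request over the socket -- using only the Python standard library rather than SPDK's own client; framing and error handling are deliberately simplified.

    import json
    import socket

    def spdk_rpc(sock_path, method, params=None):
        # SPDK's RPC server speaks JSON-RPC 2.0 over a UNIX domain socket.
        req = {"jsonrpc": "2.0", "method": method, "id": 1}
        if params:
            req["params"] = params
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(sock_path)
            s.sendall(json.dumps(req).encode())
            buf = b""
            while True:
                buf += s.recv(4096)
                try:
                    resp = json.loads(buf)   # stop once a complete object arrived
                    break
                except json.JSONDecodeError:
                    continue
        if "error" in resp:
            # e.g. the {"code": -19, "message": "No such device"} response above
            raise RuntimeError(resp["error"])
        return resp["result"]

    # Equivalent of: rpc.py -s /var/tmp/bperf.sock keyring_get_keys
    # keys = spdk_rpc("/var/tmp/bperf.sock", "keyring_get_keys")
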
00:38:01.778 ************************************ 00:38:01.778 START TEST keyring_linux 00:38:01.778 ************************************ 00:38:01.778 07:21:01 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:38:01.778 Joined session keyring: 536815257 00:38:01.778 * Looking for test storage... 00:38:01.778 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:38:01.778 07:21:01 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:01.778 07:21:01 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:38:01.778 07:21:01 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:02.039 07:21:01 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:02.039 07:21:01 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:02.039 07:21:01 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:02.039 07:21:01 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:02.039 07:21:01 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:38:02.039 07:21:01 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:38:02.039 07:21:01 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:38:02.039 07:21:01 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:38:02.039 07:21:01 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:38:02.039 07:21:01 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:38:02.039 07:21:01 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:38:02.039 07:21:01 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:02.039 07:21:01 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:38:02.039 07:21:01 keyring_linux -- scripts/common.sh@345 -- # : 1 00:38:02.039 07:21:01 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:02.039 07:21:01 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:02.039 07:21:01 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:38:02.039 07:21:01 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:38:02.039 07:21:01 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:02.039 07:21:01 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:38:02.039 07:21:01 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:38:02.039 07:21:01 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:38:02.039 07:21:01 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:38:02.039 07:21:01 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:02.039 07:21:01 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:38:02.039 07:21:01 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:38:02.039 07:21:01 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:02.039 07:21:01 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:02.039 07:21:01 keyring_linux -- scripts/common.sh@368 -- # return 0 00:38:02.039 07:21:01 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:02.039 07:21:01 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:02.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:02.039 --rc genhtml_branch_coverage=1 00:38:02.039 --rc genhtml_function_coverage=1 00:38:02.039 --rc genhtml_legend=1 00:38:02.039 --rc geninfo_all_blocks=1 00:38:02.039 --rc geninfo_unexecuted_blocks=1 00:38:02.039 00:38:02.039 ' 00:38:02.039 07:21:01 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:02.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:02.039 --rc genhtml_branch_coverage=1 00:38:02.039 --rc genhtml_function_coverage=1 00:38:02.039 --rc genhtml_legend=1 00:38:02.039 --rc geninfo_all_blocks=1 00:38:02.039 --rc geninfo_unexecuted_blocks=1 00:38:02.039 00:38:02.039 ' 00:38:02.039 07:21:01 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:02.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:02.039 --rc genhtml_branch_coverage=1 00:38:02.039 --rc genhtml_function_coverage=1 00:38:02.039 --rc genhtml_legend=1 00:38:02.039 --rc geninfo_all_blocks=1 00:38:02.039 --rc geninfo_unexecuted_blocks=1 00:38:02.039 00:38:02.039 ' 00:38:02.039 07:21:01 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:02.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:02.039 --rc genhtml_branch_coverage=1 00:38:02.039 --rc genhtml_function_coverage=1 00:38:02.039 --rc genhtml_legend=1 00:38:02.039 --rc geninfo_all_blocks=1 00:38:02.039 --rc geninfo_unexecuted_blocks=1 00:38:02.039 00:38:02.039 ' 00:38:02.039 07:21:01 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:38:02.039 07:21:01 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:02.039 07:21:01 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:38:02.039 07:21:01 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:02.039 07:21:01 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:02.039 07:21:01 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:02.039 07:21:01 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:02.039 07:21:01 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:38:02.039 07:21:01 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:02.039 07:21:01 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:02.039 07:21:01 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:02.039 07:21:01 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:02.039 07:21:01 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:02.040 07:21:01 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:02.040 07:21:01 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:02.040 07:21:01 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:02.040 07:21:01 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:02.040 07:21:01 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:02.040 07:21:01 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:02.040 07:21:01 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:02.040 07:21:01 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:38:02.040 07:21:01 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:02.040 07:21:01 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:02.040 07:21:01 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:02.040 07:21:01 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:02.040 07:21:01 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:02.040 07:21:01 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:02.040 07:21:01 keyring_linux -- paths/export.sh@5 -- # export PATH 00:38:02.040 07:21:01 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
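The lcov probe at the top of the keyring_linux preamble (lt 1.15 2, backed by cmp_versions in scripts/common.sh) splits each version string on '.', '-' and ':' and compares the components numerically, treating missing components as zero. A rough Python equivalent of that comparison, for illustration only:

    import re

    def cmp_versions(ver1, op, ver2):
        # Mirror scripts/common.sh: split on '.', '-' or ':', compare numerically.
        split = lambda v: [int(x) for x in re.split(r"[.:-]", v) if x.isdigit()]
        a, b = split(ver1), split(ver2)
        n = max(len(a), len(b))
        a += [0] * (n - len(a))   # pad the shorter list with zeros,
        b += [0] * (n - len(b))   # as the shell loop effectively does
        return {"<": a < b, ">": a > b, "==": a == b}[op]

    assert cmp_versions("1.15", "<", "2")        # the check that selected lcov opts
    assert not cmp_versions("2.39.2", "<", "2")
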
00:38:02.040 07:21:01 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:38:02.040 07:21:01 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:02.040 07:21:01 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:02.040 07:21:01 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:02.040 07:21:01 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:02.040 07:21:01 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:02.040 07:21:01 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:02.040 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:02.040 07:21:01 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:02.040 07:21:01 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:02.040 07:21:01 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:02.040 07:21:01 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:02.040 07:21:01 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:02.040 07:21:01 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:02.040 07:21:01 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:38:02.040 07:21:01 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:38:02.040 07:21:01 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:38:02.040 07:21:01 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:38:02.040 07:21:01 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:38:02.040 07:21:01 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:38:02.040 07:21:01 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:02.040 07:21:01 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:38:02.040 07:21:01 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:38:02.040 07:21:01 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:02.040 07:21:01 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:02.040 07:21:01 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:38:02.040 07:21:01 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:38:02.040 07:21:01 keyring_linux -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:38:02.040 07:21:01 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:38:02.040 07:21:01 keyring_linux -- nvmf/common.sh@731 -- # python - 00:38:02.040 07:21:01 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:38:02.040 07:21:01 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:38:02.040 /tmp/:spdk-test:key0 00:38:02.040 07:21:01 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:38:02.040 07:21:01 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:38:02.040 07:21:01 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:38:02.040 07:21:01 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:02.040 07:21:01 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:38:02.040 07:21:01 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:38:02.040 
07:21:01 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:02.040 07:21:01 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:38:02.040 07:21:01 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:38:02.040 07:21:01 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:38:02.040 07:21:01 keyring_linux -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:38:02.040 07:21:01 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:38:02.040 07:21:01 keyring_linux -- nvmf/common.sh@731 -- # python - 00:38:02.040 07:21:01 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:38:02.040 07:21:01 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:38:02.040 /tmp/:spdk-test:key1 00:38:02.040 07:21:01 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:02.040 07:21:01 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3457392 00:38:02.040 07:21:01 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3457392 00:38:02.040 07:21:01 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 3457392 ']' 00:38:02.040 07:21:01 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:02.040 07:21:01 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:02.040 07:21:01 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:02.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:02.040 07:21:01 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:02.040 07:21:01 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:02.040 [2024-10-16 07:21:01.481173] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
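Both /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1 above are written in the NVMe/TCP PSK interchange format that format_interchange_psk emits: the NVMeTLSkey-1 prefix, a two-digit hash field (00 here, i.e. no HMAC transform), and a base64 blob, each colon-terminated. The sketch below shows how such a string can be assembled, on the assumption (per the TP 8011 interchange format) that a little-endian CRC-32 of the key bytes is appended before base64 encoding; if that convention were different, only the trailing base64 characters would change.

    import base64
    import struct
    import zlib

    def format_interchange_psk(key, digest=0):
        # The configured PSK here is the literal ASCII string; a CRC-32 of
        # those bytes is assumed appended little-endian, then base64-encoded.
        psk = key.encode("ascii")
        crc = struct.pack("<I", zlib.crc32(psk))
        blob = base64.b64encode(psk + crc).decode()
        return "NVMeTLSkey-1:{:02x}:{}:".format(digest, blob)

    print(format_interchange_psk("00112233445566778899aabbccddeeff"))
    # If the CRC assumption holds, this reproduces the key0 payload used in
    # the keyctl commands just below:
    # NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
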
00:38:02.040 [2024-10-16 07:21:01.481257] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3457392 ] 00:38:02.301 [2024-10-16 07:21:01.559419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:02.301 [2024-10-16 07:21:01.595321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:02.301 07:21:01 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:02.301 07:21:01 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:38:02.301 07:21:01 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:38:02.301 07:21:01 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:02.301 07:21:01 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:02.301 [2024-10-16 07:21:01.783941] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:02.562 null0 00:38:02.562 [2024-10-16 07:21:01.815999] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:02.562 [2024-10-16 07:21:01.816347] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:02.562 07:21:01 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:02.562 07:21:01 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:38:02.562 963268536 00:38:02.562 07:21:01 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:38:02.562 950444706 00:38:02.562 07:21:01 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3457593 00:38:02.562 07:21:01 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3457593 /var/tmp/bperf.sock 00:38:02.562 07:21:01 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:38:02.562 07:21:01 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 3457593 ']' 00:38:02.562 07:21:01 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:02.562 07:21:01 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:02.562 07:21:01 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:02.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:02.562 07:21:01 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:02.562 07:21:01 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:02.562 [2024-10-16 07:21:01.895750] Starting SPDK v25.01-pre git sha1 70fd76b04 / DPDK 24.03.0 initialization... 
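Unlike keyring_file, keyring_linux stores the PSKs in the kernel session keyring: keyctl add prints the serial number (963268536 and 950444706 above) that keyring_get_keys later reports as .sn, and cleanup unlinks by that serial. Below is a small wrapper around the same keyctl invocations the script uses; it assumes the keyutils package is installed, and the key name and payload are copied from the log.

    import subprocess

    def keyctl(*args):
        out = subprocess.run(["keyctl", *args], check=True,
                             capture_output=True, text=True)
        return out.stdout.strip()

    psk = "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:"

    # keyctl add user :spdk-test:key0 "$psk" @s  -> prints the new serial
    sn = keyctl("add", "user", ":spdk-test:key0", psk, "@s")

    # check_keys in linux.sh asserts that a session-keyring search returns
    # the same serial and that 'keyctl print' round-trips the payload.
    assert keyctl("search", "@s", "user", ":spdk-test:key0") == sn
    assert keyctl("print", sn) == psk

    keyctl("unlink", sn)   # cleanup; keyctl reports '1 links removed'
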
00:38:02.562 [2024-10-16 07:21:01.895803] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3457593 ] 00:38:02.562 [2024-10-16 07:21:01.971082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:02.562 [2024-10-16 07:21:02.000673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:03.503 07:21:02 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:03.503 07:21:02 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:38:03.503 07:21:02 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:38:03.503 07:21:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:38:03.503 07:21:02 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:38:03.503 07:21:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:38:03.763 07:21:03 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:38:03.763 07:21:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:38:03.763 [2024-10-16 07:21:03.228555] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:04.025 nvme0n1 00:38:04.025 07:21:03 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:38:04.025 07:21:03 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:38:04.025 07:21:03 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:38:04.025 07:21:03 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:38:04.025 07:21:03 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:38:04.025 07:21:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:04.025 07:21:03 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:38:04.025 07:21:03 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:38:04.025 07:21:03 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:38:04.025 07:21:03 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:38:04.025 07:21:03 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:04.025 07:21:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:04.025 07:21:03 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:38:04.286 07:21:03 keyring_linux -- keyring/linux.sh@25 -- # sn=963268536 00:38:04.286 07:21:03 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:38:04.286 07:21:03 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:38:04.286 07:21:03 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 963268536 == \9\6\3\2\6\8\5\3\6 ]] 00:38:04.286 07:21:03 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 963268536 00:38:04.286 07:21:03 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:38:04.286 07:21:03 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:04.286 Running I/O for 1 seconds... 00:38:05.671 24479.00 IOPS, 95.62 MiB/s 00:38:05.671 Latency(us) 00:38:05.671 [2024-10-16T05:21:05.170Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:05.671 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:38:05.671 nvme0n1 : 1.01 24479.07 95.62 0.00 0.00 5213.36 4314.45 12724.91 00:38:05.671 [2024-10-16T05:21:05.170Z] =================================================================================================================== 00:38:05.671 [2024-10-16T05:21:05.170Z] Total : 24479.07 95.62 0.00 0.00 5213.36 4314.45 12724.91 00:38:05.671 { 00:38:05.671 "results": [ 00:38:05.671 { 00:38:05.671 "job": "nvme0n1", 00:38:05.671 "core_mask": "0x2", 00:38:05.671 "workload": "randread", 00:38:05.671 "status": "finished", 00:38:05.671 "queue_depth": 128, 00:38:05.671 "io_size": 4096, 00:38:05.671 "runtime": 1.005226, 00:38:05.671 "iops": 24479.072367805846, 00:38:05.671 "mibps": 95.62137643674158, 00:38:05.671 "io_failed": 0, 00:38:05.671 "io_timeout": 0, 00:38:05.671 "avg_latency_us": 5213.355575242817, 00:38:05.671 "min_latency_us": 4314.453333333333, 00:38:05.671 "max_latency_us": 12724.906666666666 00:38:05.671 } 00:38:05.671 ], 00:38:05.671 "core_count": 1 00:38:05.671 } 00:38:05.671 07:21:04 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:05.671 07:21:04 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:05.671 07:21:04 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:38:05.671 07:21:04 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:38:05.671 07:21:04 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:38:05.671 07:21:04 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:38:05.671 07:21:04 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:38:05.671 07:21:04 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:05.671 07:21:05 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:38:05.671 07:21:05 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:38:05.671 07:21:05 keyring_linux -- keyring/linux.sh@23 -- # return 00:38:05.671 07:21:05 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:05.933 07:21:05 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:38:05.933 07:21:05 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:38:05.933 07:21:05 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:38:05.933 07:21:05 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:05.933 07:21:05 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:38:05.933 07:21:05 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:05.933 07:21:05 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:05.933 07:21:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:05.933 [2024-10-16 07:21:05.335386] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:38:05.933 [2024-10-16 07:21:05.336040] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x778a70 (107): Transport endpoint is not connected 00:38:05.933 [2024-10-16 07:21:05.337036] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x778a70 (9): Bad file descriptor 00:38:05.933 [2024-10-16 07:21:05.338039] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:38:05.933 [2024-10-16 07:21:05.338048] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:38:05.933 [2024-10-16 07:21:05.338054] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:38:05.933 [2024-10-16 07:21:05.338061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
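Before the failing attach is dumped below, the successful bdevperf pass above is worth a sanity check: the JSON result block is internally consistent, and with a queue depth of 128 the reported latency is close to what Little's law predicts. A quick check with the numbers copied from that block:

    # Figures from the result block above (randread, queue depth 128, 4 KiB IOs).
    iops = 24479.072367805846
    io_size = 4096
    queue_depth = 128

    mibps = iops * io_size / (1 << 20)
    print(f"{mibps:.2f} MiB/s")        # 95.62, matching the reported 'mibps'

    # Little's law: in-flight IOs = completion rate x mean latency,
    # so mean latency ~ queue_depth / IOPS.
    lat_us = queue_depth / iops * 1e6
    print(f"~{lat_us:.0f} us")         # ~5229 us vs the reported 5213.36 us

The request/response pair that follows is the deliberate negative test: attaching with --psk :spdk-test:key1, whose key material differs from the key0 the listener was set up with, fails during the TLS handshake, and the RPC surfaces it as code -5, Input/output error.
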
00:38:05.933 request:
00:38:05.933 {
00:38:05.933 "name": "nvme0",
00:38:05.933 "trtype": "tcp",
00:38:05.933 "traddr": "127.0.0.1",
00:38:05.933 "adrfam": "ipv4",
00:38:05.933 "trsvcid": "4420",
00:38:05.933 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:38:05.933 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:38:05.933 "prchk_reftag": false,
00:38:05.933 "prchk_guard": false,
00:38:05.933 "hdgst": false,
00:38:05.933 "ddgst": false,
00:38:05.933 "psk": ":spdk-test:key1",
00:38:05.933 "allow_unrecognized_csi": false,
00:38:05.933 "method": "bdev_nvme_attach_controller",
00:38:05.933 "req_id": 1
00:38:05.933 }
00:38:05.933 Got JSON-RPC error response
00:38:05.933 response:
00:38:05.933 {
00:38:05.933 "code": -5,
00:38:05.933 "message": "Input/output error"
00:38:05.933 }
00:38:05.933 07:21:05 keyring_linux -- common/autotest_common.sh@653 -- # es=1
00:38:05.933 07:21:05 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:38:05.933 07:21:05 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:38:05.933 07:21:05 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:38:05.933 07:21:05 keyring_linux -- keyring/linux.sh@1 -- # cleanup
00:38:05.933 07:21:05 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:38:05.933 07:21:05 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0
00:38:05.933 07:21:05 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn
00:38:05.933 07:21:05 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0
00:38:05.933 07:21:05 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0
00:38:05.933 07:21:05 keyring_linux -- keyring/linux.sh@33 -- # sn=963268536
00:38:05.933 07:21:05 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 963268536
00:38:05.933 1 links removed
00:38:05.933 07:21:05 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:38:05.933 07:21:05 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1
00:38:05.933 07:21:05 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn
00:38:05.933 07:21:05 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1
00:38:05.933 07:21:05 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1
00:38:05.933 07:21:05 keyring_linux -- keyring/linux.sh@33 -- # sn=950444706
00:38:05.933 07:21:05 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 950444706
00:38:05.933 1 links removed
00:38:05.933 07:21:05 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3457593
00:38:05.933 07:21:05 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 3457593 ']'
00:38:05.933 07:21:05 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 3457593
00:38:05.933 07:21:05 keyring_linux -- common/autotest_common.sh@955 -- # uname
00:38:05.933 07:21:05 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:38:05.933 07:21:05 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3457593
00:38:06.193 07:21:05 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:38:06.193 07:21:05 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:38:06.193 07:21:05 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3457593'
00:38:06.193 killing process with pid 3457593
00:38:06.193 07:21:05 keyring_linux -- common/autotest_common.sh@969 -- # kill 3457593
00:38:06.193 Received shutdown signal, test time was about 1.000000 seconds
00:38:06.193
00:38:06.193 Latency(us)
00:38:06.193 [2024-10-16T05:21:05.692Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:06.193 [2024-10-16T05:21:05.692Z] ===================================================================================================================
00:38:06.193 [2024-10-16T05:21:05.692Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:38:06.193 07:21:05 keyring_linux -- common/autotest_common.sh@974 -- # wait 3457593
00:38:06.193 07:21:05 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3457392
00:38:06.193 07:21:05 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 3457392 ']'
00:38:06.193 07:21:05 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 3457392
00:38:06.193 07:21:05 keyring_linux -- common/autotest_common.sh@955 -- # uname
00:38:06.193 07:21:05 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:38:06.193 07:21:05 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3457392
00:38:06.193 07:21:05 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:38:06.193 07:21:05 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:38:06.193 07:21:05 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3457392'
00:38:06.194 killing process with pid 3457392
00:38:06.194 07:21:05 keyring_linux -- common/autotest_common.sh@969 -- # kill 3457392
00:38:06.194 07:21:05 keyring_linux -- common/autotest_common.sh@974 -- # wait 3457392
00:38:06.455
00:38:06.455 real 0m4.691s
00:38:06.455 user 0m9.100s
00:38:06.455 sys 0m1.354s
00:38:06.455 07:21:05 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable
00:38:06.455 07:21:05 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:38:06.455 ************************************
00:38:06.455 END TEST keyring_linux
00:38:06.455 ************************************
00:38:06.455 07:21:05 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']'
00:38:06.455 07:21:05 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:38:06.455 07:21:05 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:38:06.455 07:21:05 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']'
00:38:06.455 07:21:05 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']'
00:38:06.455 07:21:05 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']'
00:38:06.455 07:21:05 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:38:06.455 07:21:05 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:38:06.455 07:21:05 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:38:06.455 07:21:05 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']'
00:38:06.455 07:21:05 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:38:06.455 07:21:05 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]]
00:38:06.455 07:21:05 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:38:06.455 07:21:05 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:38:06.455 07:21:05 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]]
00:38:06.455 07:21:05 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT
00:38:06.455 07:21:05 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup
00:38:06.455 07:21:05 -- common/autotest_common.sh@724 -- # xtrace_disable
00:38:06.455 07:21:05 -- common/autotest_common.sh@10 -- # set +x
00:38:06.455 07:21:05 -- spdk/autotest.sh@384 -- # autotest_cleanup
00:38:06.455 07:21:05 -- common/autotest_common.sh@1392 -- # local autotest_es=0
00:38:06.455 07:21:05 -- common/autotest_common.sh@1393 -- # xtrace_disable
00:38:06.455 07:21:05 -- common/autotest_common.sh@10 -- # set +x
00:38:14.605 INFO: APP EXITING
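(The unlink_key cleanup traced above resolves each :spdk-test:keyN description to its serial number before unlinking it from the session keyring. A condensed sketch of that pattern, with names taken from this run; the serial numbers are per-boot and will differ:

  sn=$(keyctl search @s user :spdk-test:key0)   # look up the key's serial in the session keyring
  keyctl unlink "$sn"                           # drop the link; keyctl reports "1 links removed")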
00:38:14.605 INFO: killing all VMs
00:38:14.605 INFO: killing vhost app
00:38:14.605 INFO: EXIT DONE
00:38:17.921 0000:80:01.6 (8086 0b00): Already using the ioatdma driver
00:38:17.921 0000:80:01.7 (8086 0b00): Already using the ioatdma driver
00:38:17.921 0000:80:01.4 (8086 0b00): Already using the ioatdma driver
00:38:17.921 0000:80:01.5 (8086 0b00): Already using the ioatdma driver
00:38:17.921 0000:80:01.2 (8086 0b00): Already using the ioatdma driver
00:38:17.921 0000:80:01.3 (8086 0b00): Already using the ioatdma driver
00:38:17.921 0000:80:01.0 (8086 0b00): Already using the ioatdma driver
00:38:17.921 0000:80:01.1 (8086 0b00): Already using the ioatdma driver
00:38:17.921 0000:65:00.0 (144d a80a): Already using the nvme driver
00:38:17.921 0000:00:01.6 (8086 0b00): Already using the ioatdma driver
00:38:18.181 0000:00:01.7 (8086 0b00): Already using the ioatdma driver
00:38:18.181 0000:00:01.4 (8086 0b00): Already using the ioatdma driver
00:38:18.181 0000:00:01.5 (8086 0b00): Already using the ioatdma driver
00:38:18.181 0000:00:01.2 (8086 0b00): Already using the ioatdma driver
00:38:18.181 0000:00:01.3 (8086 0b00): Already using the ioatdma driver
00:38:18.181 0000:00:01.0 (8086 0b00): Already using the ioatdma driver
00:38:18.181 0000:00:01.1 (8086 0b00): Already using the ioatdma driver
00:38:22.387 Cleaning
00:38:22.387 Removing: /var/run/dpdk/spdk0/config
00:38:22.387 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:38:22.387 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:38:22.387 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:38:22.387 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:38:22.387 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:38:22.387 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:38:22.387 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:38:22.387 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:38:22.387 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:38:22.387 Removing: /var/run/dpdk/spdk0/hugepage_info
00:38:22.387 Removing: /var/run/dpdk/spdk1/config
00:38:22.387 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:38:22.387 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:38:22.387 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:38:22.387 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:38:22.387 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:38:22.387 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:38:22.387 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:38:22.387 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:38:22.387 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:38:22.387 Removing: /var/run/dpdk/spdk1/hugepage_info
00:38:22.387 Removing: /var/run/dpdk/spdk2/config
00:38:22.387 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:38:22.387 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:38:22.387 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:38:22.387 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:38:22.387 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:38:22.387 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:38:22.387 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:38:22.387 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:38:22.387 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:38:22.387 Removing: /var/run/dpdk/spdk2/hugepage_info
00:38:22.387 Removing: /var/run/dpdk/spdk3/config
00:38:22.388 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:38:22.388 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:38:22.388 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:38:22.388 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:38:22.388 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:38:22.388 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:38:22.388 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:38:22.388 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:38:22.388 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:38:22.388 Removing: /var/run/dpdk/spdk3/hugepage_info
00:38:22.388 Removing: /var/run/dpdk/spdk4/config
00:38:22.388 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:38:22.388 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:38:22.388 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:38:22.388 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:38:22.388 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:38:22.388 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:38:22.388 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:38:22.388 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:38:22.388 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:38:22.388 Removing: /var/run/dpdk/spdk4/hugepage_info
00:38:22.388 Removing: /dev/shm/bdev_svc_trace.1
00:38:22.388 Removing: /dev/shm/nvmf_trace.0
00:38:22.388 Removing: /dev/shm/spdk_tgt_trace.pid2886213
00:38:22.388 Removing: /var/run/dpdk/spdk0
00:38:22.388 Removing: /var/run/dpdk/spdk1
00:38:22.388 Removing: /var/run/dpdk/spdk2
00:38:22.388 Removing: /var/run/dpdk/spdk3
00:38:22.388 Removing: /var/run/dpdk/spdk4
00:38:22.388 Removing: /var/run/dpdk/spdk_pid2884728
00:38:22.388 Removing: /var/run/dpdk/spdk_pid2886213
00:38:22.388 Removing: /var/run/dpdk/spdk_pid2887058
00:38:22.388 Removing: /var/run/dpdk/spdk_pid2888100
00:38:22.388 Removing: /var/run/dpdk/spdk_pid2888442
00:38:22.388 Removing: /var/run/dpdk/spdk_pid2889507
00:38:22.388 Removing: /var/run/dpdk/spdk_pid2889515
00:38:22.388 Removing: /var/run/dpdk/spdk_pid2889972
00:38:22.388 Removing: /var/run/dpdk/spdk_pid2891109
00:38:22.388 Removing: /var/run/dpdk/spdk_pid2891588
00:38:22.388 Removing: /var/run/dpdk/spdk_pid2891979
00:38:22.388 Removing: /var/run/dpdk/spdk_pid2892376
00:38:22.388 Removing: /var/run/dpdk/spdk_pid2892791
00:38:22.388 Removing: /var/run/dpdk/spdk_pid2893192
00:38:22.388 Removing: /var/run/dpdk/spdk_pid2893545
00:38:22.388 Removing: /var/run/dpdk/spdk_pid2893749
00:38:22.388 Removing: /var/run/dpdk/spdk_pid2894017
00:38:22.388 Removing: /var/run/dpdk/spdk_pid2895353
00:38:22.388 Removing: /var/run/dpdk/spdk_pid2898664
00:38:22.388 Removing: /var/run/dpdk/spdk_pid2899093
00:38:22.388 Removing: /var/run/dpdk/spdk_pid2899455
00:38:22.388 Removing: /var/run/dpdk/spdk_pid2899684
00:38:22.388 Removing: /var/run/dpdk/spdk_pid2900106
00:38:22.388 Removing: /var/run/dpdk/spdk_pid2900469
00:38:22.388 Removing: /var/run/dpdk/spdk_pid2900885
00:38:22.388 Removing: /var/run/dpdk/spdk_pid2900968
00:38:22.388 Removing: /var/run/dpdk/spdk_pid2901280
00:38:22.388 Removing: /var/run/dpdk/spdk_pid2901595
00:38:22.388 Removing: /var/run/dpdk/spdk_pid2901718
00:38:22.388 Removing: /var/run/dpdk/spdk_pid2901974
00:38:22.388 Removing: /var/run/dpdk/spdk_pid2902807
00:38:22.388 Removing: /var/run/dpdk/spdk_pid2903229
00:38:22.388 Removing: /var/run/dpdk/spdk_pid2903637
00:38:22.388 Removing: /var/run/dpdk/spdk_pid2908157
00:38:22.388 Removing: /var/run/dpdk/spdk_pid2913538
00:38:22.388 Removing: /var/run/dpdk/spdk_pid2925561
00:38:22.388 Removing: /var/run/dpdk/spdk_pid2926249
00:38:22.388 Removing: /var/run/dpdk/spdk_pid2931636
00:38:22.388 Removing: /var/run/dpdk/spdk_pid2931996
00:38:22.388 Removing: /var/run/dpdk/spdk_pid2937144
00:38:22.388 Removing: /var/run/dpdk/spdk_pid2944358
00:38:22.388 Removing: /var/run/dpdk/spdk_pid2947567
00:38:22.388 Removing: /var/run/dpdk/spdk_pid2960677
00:38:22.388 Removing: /var/run/dpdk/spdk_pid2971731
00:38:22.388 Removing: /var/run/dpdk/spdk_pid2973748
00:38:22.388 Removing: /var/run/dpdk/spdk_pid2974791
00:38:22.388 Removing: /var/run/dpdk/spdk_pid2995751
00:38:22.388 Removing: /var/run/dpdk/spdk_pid3000505
00:38:22.388 Removing: /var/run/dpdk/spdk_pid3057420
00:38:22.388 Removing: /var/run/dpdk/spdk_pid3063975
00:38:22.388 Removing: /var/run/dpdk/spdk_pid3071485
00:38:22.388 Removing: /var/run/dpdk/spdk_pid3078903
00:38:22.388 Removing: /var/run/dpdk/spdk_pid3078979
00:38:22.388 Removing: /var/run/dpdk/spdk_pid3079999
00:38:22.388 Removing: /var/run/dpdk/spdk_pid3081029
00:38:22.388 Removing: /var/run/dpdk/spdk_pid3082048
00:38:22.388 Removing: /var/run/dpdk/spdk_pid3082641
00:38:22.388 Removing: /var/run/dpdk/spdk_pid3082722
00:38:22.388 Removing: /var/run/dpdk/spdk_pid3082958
00:38:22.388 Removing: /var/run/dpdk/spdk_pid3083064
00:38:22.388 Removing: /var/run/dpdk/spdk_pid3083075
00:38:22.388 Removing: /var/run/dpdk/spdk_pid3084079
00:38:22.388 Removing: /var/run/dpdk/spdk_pid3085083
00:38:22.388 Removing: /var/run/dpdk/spdk_pid3086090
00:38:22.388 Removing: /var/run/dpdk/spdk_pid3086761
00:38:22.388 Removing: /var/run/dpdk/spdk_pid3086763
00:38:22.388 Removing: /var/run/dpdk/spdk_pid3087101
00:38:22.388 Removing: /var/run/dpdk/spdk_pid3088206
00:38:22.388 Removing: /var/run/dpdk/spdk_pid3089601
00:38:22.388 Removing: /var/run/dpdk/spdk_pid3099357
00:38:22.649 Removing: /var/run/dpdk/spdk_pid3134067
00:38:22.649 Removing: /var/run/dpdk/spdk_pid3139472
00:38:22.649 Removing: /var/run/dpdk/spdk_pid3141474
00:38:22.649 Removing: /var/run/dpdk/spdk_pid3143666
00:38:22.649 Removing: /var/run/dpdk/spdk_pid3143852
00:38:22.649 Removing: /var/run/dpdk/spdk_pid3144170
00:38:22.649 Removing: /var/run/dpdk/spdk_pid3144370
00:38:22.649 Removing: /var/run/dpdk/spdk_pid3144932
00:38:22.649 Removing: /var/run/dpdk/spdk_pid3147257
00:38:22.649 Removing: /var/run/dpdk/spdk_pid3148562
00:38:22.649 Removing: /var/run/dpdk/spdk_pid3149151
00:38:22.649 Removing: /var/run/dpdk/spdk_pid3152253
00:38:22.649 Removing: /var/run/dpdk/spdk_pid3153025
00:38:22.649 Removing: /var/run/dpdk/spdk_pid3153742
00:38:22.649 Removing: /var/run/dpdk/spdk_pid3158807
00:38:22.649 Removing: /var/run/dpdk/spdk_pid3165503
00:38:22.649 Removing: /var/run/dpdk/spdk_pid3165504
00:38:22.649 Removing: /var/run/dpdk/spdk_pid3165505
00:38:22.649 Removing: /var/run/dpdk/spdk_pid3170197
00:38:22.649 Removing: /var/run/dpdk/spdk_pid3180156
00:38:22.649 Removing: /var/run/dpdk/spdk_pid3184952
00:38:22.649 Removing: /var/run/dpdk/spdk_pid3192477
00:38:22.649 Removing: /var/run/dpdk/spdk_pid3193974
00:38:22.649 Removing: /var/run/dpdk/spdk_pid3195507
00:38:22.649 Removing: /var/run/dpdk/spdk_pid3197333
00:38:22.649 Removing: /var/run/dpdk/spdk_pid3203600
00:38:22.649 Removing: /var/run/dpdk/spdk_pid3208523
00:38:22.649 Removing: /var/run/dpdk/spdk_pid3217736
00:38:22.649 Removing: /var/run/dpdk/spdk_pid3217818
00:38:22.649 Removing: /var/run/dpdk/spdk_pid3222872
00:38:22.649 Removing: /var/run/dpdk/spdk_pid3223191
00:38:22.649 Removing: /var/run/dpdk/spdk_pid3223392
00:38:22.649 Removing: /var/run/dpdk/spdk_pid3223880
00:38:22.649 Removing: /var/run/dpdk/spdk_pid3223885
00:38:22.649 Removing: /var/run/dpdk/spdk_pid3229396
00:38:22.649 Removing: /var/run/dpdk/spdk_pid3230094
00:38:22.649 Removing: /var/run/dpdk/spdk_pid3235417
00:38:22.649 Removing: /var/run/dpdk/spdk_pid3238632
00:38:22.649 Removing: /var/run/dpdk/spdk_pid3245025
00:38:22.649 Removing: /var/run/dpdk/spdk_pid3251579
00:38:22.649 Removing: /var/run/dpdk/spdk_pid3262363
00:38:22.649 Removing: /var/run/dpdk/spdk_pid3271043
00:38:22.649 Removing: /var/run/dpdk/spdk_pid3271068
00:38:22.649 Removing: /var/run/dpdk/spdk_pid3294142
00:38:22.649 Removing: /var/run/dpdk/spdk_pid3294891
00:38:22.649 Removing: /var/run/dpdk/spdk_pid3295577
00:38:22.649 Removing: /var/run/dpdk/spdk_pid3296268
00:38:22.649 Removing: /var/run/dpdk/spdk_pid3297327
00:38:22.649 Removing: /var/run/dpdk/spdk_pid3298015
00:38:22.649 Removing: /var/run/dpdk/spdk_pid3298697
00:38:22.649 Removing: /var/run/dpdk/spdk_pid3299498
00:38:22.649 Removing: /var/run/dpdk/spdk_pid3304716
00:38:22.649 Removing: /var/run/dpdk/spdk_pid3304982
00:38:22.649 Removing: /var/run/dpdk/spdk_pid3312699
00:38:22.649 Removing: /var/run/dpdk/spdk_pid3313001
00:38:22.649 Removing: /var/run/dpdk/spdk_pid3319542
00:38:22.649 Removing: /var/run/dpdk/spdk_pid3324575
00:38:22.911 Removing: /var/run/dpdk/spdk_pid3336202
00:38:22.911 Removing: /var/run/dpdk/spdk_pid3336869
00:38:22.911 Removing: /var/run/dpdk/spdk_pid3341926
00:38:22.911 Removing: /var/run/dpdk/spdk_pid3342294
00:38:22.911 Removing: /var/run/dpdk/spdk_pid3347324
00:38:22.911 Removing: /var/run/dpdk/spdk_pid3354163
00:38:22.911 Removing: /var/run/dpdk/spdk_pid3357218
00:38:22.911 Removing: /var/run/dpdk/spdk_pid3369978
00:38:22.911 Removing: /var/run/dpdk/spdk_pid3380570
00:38:22.911 Removing: /var/run/dpdk/spdk_pid3382541
00:38:22.911 Removing: /var/run/dpdk/spdk_pid3383592
00:38:22.911 Removing: /var/run/dpdk/spdk_pid3403161
00:38:22.911 Removing: /var/run/dpdk/spdk_pid3407866
00:38:22.911 Removing: /var/run/dpdk/spdk_pid3411175
00:38:22.911 Removing: /var/run/dpdk/spdk_pid3419385
00:38:22.911 Removing: /var/run/dpdk/spdk_pid3419391
00:38:22.911 Removing: /var/run/dpdk/spdk_pid3425272
00:38:22.911 Removing: /var/run/dpdk/spdk_pid3427668
00:38:22.911 Removing: /var/run/dpdk/spdk_pid3429978
00:38:22.911 Removing: /var/run/dpdk/spdk_pid3431190
00:38:22.911 Removing: /var/run/dpdk/spdk_pid3433693
00:38:22.911 Removing: /var/run/dpdk/spdk_pid3435049
00:38:22.911 Removing: /var/run/dpdk/spdk_pid3445166
00:38:22.911 Removing: /var/run/dpdk/spdk_pid3445642
00:38:22.911 Removing: /var/run/dpdk/spdk_pid3446221
00:38:22.911 Removing: /var/run/dpdk/spdk_pid3449120
00:38:22.911 Removing: /var/run/dpdk/spdk_pid3449788
00:38:22.911 Removing: /var/run/dpdk/spdk_pid3450244
00:38:22.911 Removing: /var/run/dpdk/spdk_pid3454991
00:38:22.911 Removing: /var/run/dpdk/spdk_pid3455030
00:38:22.911 Removing: /var/run/dpdk/spdk_pid3456842
00:38:22.911 Removing: /var/run/dpdk/spdk_pid3457392
00:38:22.911 Removing: /var/run/dpdk/spdk_pid3457593
00:38:22.911 Clean
00:38:22.911 07:21:22 -- common/autotest_common.sh@1451 -- # return 0
00:38:22.911 07:21:22 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup
00:38:22.911 07:21:22 -- common/autotest_common.sh@730 -- # xtrace_disable
00:38:22.911 07:21:22 -- common/autotest_common.sh@10 -- # set +x
00:38:23.172 07:21:22 -- spdk/autotest.sh@387 -- # timing_exit autotest
00:38:23.172 07:21:22 -- common/autotest_common.sh@730 -- # xtrace_disable
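(The autotest epilogue below makes timing.txt world-readable, drops udev.log, and then captures test-time coverage over the spdk tree with lcov before filtering it. A condensed sketch of the capture step traced below, with $OUT standing in for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output and the repeated --rc branch/function-coverage switches omitted:

  lcov -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk \
      -t "$(hostname)" -o $OUT/cov_test.info   # collect .gcda data; tracefile labeled spdk-cyp-09 in this run)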
00:38:23.172 07:21:22 -- common/autotest_common.sh@10 -- # set +x
00:38:23.172 07:21:22 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:38:23.172 07:21:22 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:38:23.172 07:21:22 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:38:23.172 07:21:22 -- spdk/autotest.sh@392 -- # [[ y == y ]]
00:38:23.172 07:21:22 -- spdk/autotest.sh@394 -- # hostname
00:38:23.173 07:21:22 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:38:23.433 geninfo: WARNING: invalid characters removed from testname!
00:38:50.013 07:21:47 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:38:51.395 07:21:50 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:38:53.939 07:21:52 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:38:55.322 07:21:54 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:38:57.235 07:21:56 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:38:58.616 07:21:57 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:00.528 07:21:59 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:39:00.528 07:21:59 -- common/autotest_common.sh@1690 -- $ [[ y == y ]]
00:39:00.528 07:21:59 -- common/autotest_common.sh@1691 -- $ lcov --version
00:39:00.528 07:21:59 -- common/autotest_common.sh@1691 -- $ awk '{print $NF}'
00:39:00.528 07:21:59 -- common/autotest_common.sh@1691 -- $ lt 1.15 2
00:39:00.528 07:21:59 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2
00:39:00.528 07:21:59 -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:39:00.528 07:21:59 -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:39:00.528 07:21:59 -- scripts/common.sh@336 -- $ IFS=.-:
00:39:00.528 07:21:59 -- scripts/common.sh@336 -- $ read -ra ver1
00:39:00.528 07:21:59 -- scripts/common.sh@337 -- $ IFS=.-:
00:39:00.528 07:21:59 -- scripts/common.sh@337 -- $ read -ra ver2
00:39:00.528 07:21:59 -- scripts/common.sh@338 -- $ local 'op=<'
00:39:00.528 07:21:59 -- scripts/common.sh@340 -- $ ver1_l=2
00:39:00.528 07:21:59 -- scripts/common.sh@341 -- $ ver2_l=1
00:39:00.528 07:21:59 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:39:00.528 07:21:59 -- scripts/common.sh@344 -- $ case "$op" in
00:39:00.528 07:21:59 -- scripts/common.sh@345 -- $ : 1
00:39:00.528 07:21:59 -- scripts/common.sh@364 -- $ (( v = 0 ))
00:39:00.528 07:21:59 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:39:00.528 07:21:59 -- scripts/common.sh@365 -- $ decimal 1
00:39:00.528 07:21:59 -- scripts/common.sh@353 -- $ local d=1
00:39:00.528 07:21:59 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:39:00.528 07:21:59 -- scripts/common.sh@355 -- $ echo 1
00:39:00.528 07:21:59 -- scripts/common.sh@365 -- $ ver1[v]=1
00:39:00.528 07:21:59 -- scripts/common.sh@366 -- $ decimal 2
00:39:00.528 07:21:59 -- scripts/common.sh@353 -- $ local d=2
00:39:00.528 07:21:59 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:39:00.528 07:21:59 -- scripts/common.sh@355 -- $ echo 2
00:39:00.528 07:21:59 -- scripts/common.sh@366 -- $ ver2[v]=2
00:39:00.528 07:21:59 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:39:00.528 07:21:59 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:39:00.528 07:21:59 -- scripts/common.sh@368 -- $ return 0
00:39:00.528 07:21:59 -- common/autotest_common.sh@1692 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:39:00.528 07:21:59 -- common/autotest_common.sh@1704 -- $ export 'LCOV_OPTS=
00:39:00.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:39:00.528 --rc genhtml_branch_coverage=1
00:39:00.528 --rc genhtml_function_coverage=1
00:39:00.528 --rc genhtml_legend=1
00:39:00.528 --rc geninfo_all_blocks=1
00:39:00.528 --rc geninfo_unexecuted_blocks=1
00:39:00.528
00:39:00.528 '
00:39:00.528 07:21:59 -- common/autotest_common.sh@1704 -- $ LCOV_OPTS='
00:39:00.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:39:00.528 --rc genhtml_branch_coverage=1
00:39:00.528 --rc genhtml_function_coverage=1
00:39:00.528 --rc genhtml_legend=1
00:39:00.528 --rc geninfo_all_blocks=1
00:39:00.528 --rc geninfo_unexecuted_blocks=1
00:39:00.528
00:39:00.528 '
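(The lcov sequence traced above merges the base and test captures and then filters the merged tracefile in place, one exclusion pattern at a time. A condensed sketch of that post-processing, again with $OUT as shorthand for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output and the repeated --rc switches omitted:

  lcov -q -a $OUT/cov_base.info -a $OUT/cov_test.info -o $OUT/cov_total.info   # merge the two captures
  lcov -q -r $OUT/cov_total.info '*/dpdk/*' -o $OUT/cov_total.info             # drop the bundled DPDK sources
  lcov -q -r $OUT/cov_total.info '/usr/*'   -o $OUT/cov_total.info             # drop system headers
  rm -f cov_base.info cov_test.info                                            # remove intermediates)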
00:39:00.529 07:21:59 -- common/autotest_common.sh@1705 -- $ export 'LCOV=lcov
00:39:00.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:39:00.529 --rc genhtml_branch_coverage=1
00:39:00.529 --rc genhtml_function_coverage=1
00:39:00.529 --rc genhtml_legend=1
00:39:00.529 --rc geninfo_all_blocks=1
00:39:00.529 --rc geninfo_unexecuted_blocks=1
00:39:00.529
00:39:00.529 '
00:39:00.529 07:21:59 -- common/autotest_common.sh@1705 -- $ LCOV='lcov
00:39:00.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:39:00.529 --rc genhtml_branch_coverage=1
00:39:00.529 --rc genhtml_function_coverage=1
00:39:00.529 --rc genhtml_legend=1
00:39:00.529 --rc geninfo_all_blocks=1
00:39:00.529 --rc geninfo_unexecuted_blocks=1
00:39:00.529
00:39:00.529 '
00:39:00.529 07:21:59 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:39:00.529 07:21:59 -- scripts/common.sh@15 -- $ shopt -s extglob
00:39:00.529 07:21:59 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:39:00.529 07:21:59 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:39:00.529 07:21:59 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:39:00.529 07:21:59 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:39:00.529 07:21:59 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:39:00.529 07:21:59 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:39:00.529 07:21:59 -- paths/export.sh@5 -- $ export PATH
00:39:00.529 07:21:59 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:39:00.529 07:21:59 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:39:00.529 07:21:59 -- common/autobuild_common.sh@486 -- $ date +%s
00:39:00.529 07:21:59 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1729056119.XXXXXX
00:39:00.529 07:21:59 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1729056119.Roe4vh
00:39:00.529 07:21:59 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:39:00.529 07:21:59 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:39:00.529 07:21:59 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:39:00.529 07:21:59 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:39:00.529 07:21:59 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:39:00.529 07:21:59 -- common/autobuild_common.sh@502 -- $ get_config_params
00:39:00.529 07:21:59 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:39:00.529 07:21:59 -- common/autotest_common.sh@10 -- $ set +x
00:39:00.529 07:21:59 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:39:00.529 07:21:59 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:39:00.529 07:21:59 -- pm/common@17 -- $ local monitor
00:39:00.529 07:21:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:39:00.529 07:21:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:39:00.529 07:21:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:39:00.529 07:21:59 -- pm/common@21 -- $ date +%s
00:39:00.529 07:21:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:39:00.529 07:21:59 -- pm/common@25 -- $ sleep 1
00:39:00.529 07:21:59 -- pm/common@21 -- $ date +%s
00:39:00.529 07:21:59 -- pm/common@21 -- $ date +%s
00:39:00.529 07:21:59 -- pm/common@21 -- $ date +%s
00:39:00.529 07:21:59 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1729056119
00:39:00.529 07:21:59 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1729056119
00:39:00.529 07:21:59 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1729056119
00:39:00.529 07:21:59 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1729056119
Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1729056119_collect-cpu-load.pm.log
Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1729056119_collect-vmstat.pm.log
Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1729056119_collect-cpu-temp.pm.log
Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1729056119_collect-bmc-pm.bmc.pm.log
00:39:01.472 07:22:00 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:39:01.472 07:22:00 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:39:01.472 07:22:00 -- spdk/autopackage.sh@14 -- $ timing_finish
00:39:01.472 07:22:00 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:39:01.472 07:22:00 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:39:01.472 07:22:00 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:39:01.472 07:22:00 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:39:01.472 07:22:00 -- pm/common@29 -- $ signal_monitor_resources TERM
00:39:01.472 07:22:00 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:39:01.472 07:22:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:39:01.473 07:22:00 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:39:01.473 07:22:00 -- pm/common@44 -- $ pid=3471191
00:39:01.473 07:22:00 -- pm/common@50 -- $ kill -TERM 3471191
00:39:01.473 07:22:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:39:01.473 07:22:00 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:39:01.473 07:22:00 -- pm/common@44 -- $ pid=3471192
00:39:01.473 07:22:00 -- pm/common@50 -- $ kill -TERM 3471192
00:39:01.473 07:22:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:39:01.473 07:22:00 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:39:01.473 07:22:00 -- pm/common@44 -- $ pid=3471194
00:39:01.473 07:22:00 -- pm/common@50 -- $ kill -TERM 3471194
00:39:01.473 07:22:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:39:01.473 07:22:00 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:39:01.473 07:22:00 -- pm/common@44 -- $ pid=3471219
00:39:01.473 07:22:00 -- pm/common@50 -- $ sudo -E kill -TERM 3471219
00:39:01.473 + [[ -n 2799328 ]]
00:39:01.473 + sudo kill 2799328
00:39:01.484 [Pipeline] }
00:39:01.499 [Pipeline] // stage
00:39:01.505 [Pipeline] }
00:39:01.519 [Pipeline] // timeout
00:39:01.525 [Pipeline] }
00:39:01.539 [Pipeline] // catchError
00:39:01.543 [Pipeline] }
00:39:01.559 [Pipeline] // wrap
00:39:01.565 [Pipeline] }
00:39:01.580 [Pipeline] // catchError
00:39:01.589 [Pipeline] stage
00:39:01.592 [Pipeline] { (Epilogue)
00:39:01.606 [Pipeline] catchError
00:39:01.608 [Pipeline] {
00:39:01.622 [Pipeline] echo
00:39:01.625 Cleanup processes
00:39:01.632 [Pipeline] sh
00:39:01.924 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:39:01.924 3471354 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:39:01.924 3471889 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:39:01.940 [Pipeline] sh
00:39:02.282 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:39:02.282 ++ grep -v 'sudo pgrep'
00:39:02.282 ++ awk '{print $1}'
00:39:02.282 + sudo kill -9 3471354
00:39:02.357 [Pipeline] sh
00:39:02.642 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:39:14.881 [Pipeline] sh
00:39:15.169 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:39:15.169 Artifacts sizes are good
00:39:15.186 [Pipeline] archiveArtifacts
00:39:15.193 Archiving artifacts
00:39:15.362 [Pipeline] sh
00:39:15.660 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:39:15.676 [Pipeline] cleanWs
00:39:15.686 [WS-CLEANUP] Deleting project workspace...
00:39:15.686 [WS-CLEANUP] Deferred wipeout is used...
00:39:15.694 [WS-CLEANUP] done
00:39:15.696 [Pipeline] }
00:39:15.712 [Pipeline] // catchError
00:39:15.724 [Pipeline] sh
00:39:16.010 + logger -p user.info -t JENKINS-CI
00:39:16.021 [Pipeline] }
00:39:16.036 [Pipeline] // stage
00:39:16.041 [Pipeline] }
00:39:16.056 [Pipeline] // node
00:39:16.062 [Pipeline] End of Pipeline
00:39:16.106 Finished: SUCCESS